Journal of Gerontological Nursing

Feature Article 

Why Not Just Ask the Resident? Refinement of a Preference Assessment Tool for Nursing Homes

Patricia Housen, PhD; George R. Shannon, PhD, MS; Barbara Simon, MA; Maria Orlando Edelen, PhD; Mary P. Cadogan, DrPH, RN, GNP-BC; Malia Jones, MPH; Joan Buchanan, PhD; Debra Saliba, MD, MPH

Abstract

This research evaluated a draft preference assessment tool (draft-PAT) designed to replace the current Customary Routine section of the Minimum Data Set (MDS) for nursing homes. The draft-PAT was tested with a sample of nursing home residents to evaluate survey-level administration time and noncompletion rates, as well as item-level nonresponse rates, response distributions, and test-retest reliability. Modifications to the draft-PAT were then retested with a subsample of residents. Completion times were brief (generally less than 10 minutes), and only a small percentage of residents were unable to complete the interview. Item-level nonresponse rates were low for the draft-PAT (0% to 8%) and even lower during retesting for items advanced to the national field trial (0% to 4%). Item response distributions indicated reasonable use of all options across both testing occasions, and item-level test-retest reliability was high. This study found that nursing home residents can reliably report their preferences. Eighteen items from the modified draft-PAT were advanced to the national field trial of the MDS 3.0. Inclusion of the PAT in the MDS revision underscores increased emphasis on including residents’ voice in the assessment process.

Dr. Housen and Dr. Shannon were Postdoctoral Fellows at the time the research was conducted, and Ms. Simon is Health Research Scientist/Survey Director, Veterans Administration Greater Los Angeles Healthcare System Geriatric Research Education and Clinical Center (VA GLAHS GRECC) and Veterans Affairs Health Services Research & Development (VA HSR&D) Center of Excellence for the Study of Health Care Provider Behavior, Sepulveda; Dr. Edelen is Behavioral Scientist and Psychometrician, RAND Corporation, Santa Monica; Dr. Cadogan is Adjunct Professor, School of Nursing, University of California, Los Angeles (UCLA); Ms. Jones is Project Administrator, RAND Corporation, Los Angeles; and Dr. Saliba is Anna and Harry Borun Chair in Geriatrics and Gerontology, UCLA, Research Physician, VA GLAHS GRECC and VA HSR&D Center of Excellence for the Study of Health Care Provider Behavior, Director, UCLA/Los Angeles Jewish Homes Borun Center for Gerontological Research, and Senior Natural Scientist, RAND Health, Los Angeles, California. Dr. Buchanan is Retired Lecturer in Health Care Policy, Department of Health Care Policy, Harvard Medical School, Boston, Massachusetts.

The authors disclose that they have no significant financial interests in any product or class of products discussed directly or indirectly in this activity. This work was funded by the U.S. Department of Veterans Affairs (VA), Veterans Health Administration, VA HSR&D Service through the VA Greater Los Angeles HSR&D Center of Excellence (Project SDR 03-217) and the VA Office of Academic Affairs (TPP 65-002 and TPP 65-003). The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the U.S. Department of Veterans Affairs.

Address correspondence to Patricia Housen, PhD, Research Associate, Partners in Care Foundation, 732 Mott Street, Suite 150, San Fernando, CA 91340; e-mail: patriciahousen@gmail.com.

Received: April 26, 2009
Accepted: July 21, 2009
Posted Online: November 06, 2009

Nursing home residents often have physical and cognitive limitations that force them to rely on others to implement their day-to-day preferences. As a result, residents and families value individualized environments that honor their preferences and promote autonomous choices (Kane & Kane, 2001; Lum, Kane, Cutler, & Yu, 2008; McVeigh, Jablonski, & Penrod, 2009; Saliba & Schnelle, 2002). Encouraging residents to express their preferences via routinely administered questionnaires could help nursing home providers obtain fundamental information by which to individualize nursing care and maximize resident autonomy and choice. Ideally, questionnaire items would foster increased interaction between staff and residents and be feasible to administer.

Although some empirically tested preference surveys for long-term care populations exist (Carpenter, Van Haitsma, Ruckdeschel, & Lawton, 2000; Kane & Degenholtz, 1997; Whitlatch, Feinberg, & Tucke, 2005), none are specifically designed for nursing home staff members to administer to residents. Challenges of developing a valid, reliable nursing home preference survey include concerns about the willingness of frail, dependent residents to discuss facility care (Lawton, 2001); resident accommodation to frailty and the nursing home environment (Housen et al., 2008); and increased administrative burden for nursing home care providers (Noelker, Ejaz, & Schur, 2000). In addition, it is often assumed that individuals with cognitive impairment are unable to effectively participate in direct interview surveys, although a growing literature base suggests otherwise (Brod, Stewart, Sands, & Walton, 1999; Feinberg & Whitlatch, 2001; Uman et al., 2000; Van Haitsma, 2000; Whitlatch et al., 2005).

Background

The research reported in this article was part of a larger national effort by the Centers for Medicare & Medicaid Services (CMS) and the U.S. Department of Veterans Affairs (VA) to improve the clinical relevance, reliability, and validity of key sections of the Minimum Data Set (MDS) for nursing homes. Our goal was to evaluate and refine a draft of an innovative direct interview resident preference questionnaire developed to replace the Customary Routine section of the MDS. The Customary Routine section is intended to facilitate individualized nursing care planning. However, providers and consumers have voiced concern that the current Customary Routine section (a 20-item checklist relating to daily routines in the year prior to nursing home admission) inadequately addresses residents’ current preferences or quality of life.

Development of a preference assessment tool (draft-PAT) was recommended by two expert panels convened to provide advice on MDS revisions. Because of the widespread use of the MDS and concerns about the feasibility and utility of embedding a resident preference interview within this assessment, we undertook a rigorous approach in developing the 24-item draft-PAT (Housen et al., 2008). Specifically, the initial item selection process applied the following criteria:

On the basis of these criteria, a pool of potential items was tested, winnowed, and modified by conducting cognitive interviews with nursing home residents. Cognitive interview techniques are used in survey development to understand how respondents interpret survey questions and to identify potential sources of response error (Forsyth & Lessler, 1991; Presser et al., 2004).

This article describes the further evolution of the draft-PAT. To refine the draft-PAT, we used an iterative, two-phase pilot test. Analysis of item performance during the first round of regional pilot testing (Phase 1) led to further revisions, additional regional pilot testing, and reevaluation (Phase 2). In both pilot phases, we evaluated feasibility by examining administration time and noncompletion rates for the survey as a whole, as well as item-level nonresponse rates. Our goal was to forward an assessment tool for national testing that was clear, easily used to interview nursing home residents, and minimally burdensome to respondents.

Phase 1
Method

Participants

MDS coordinators at two VA nursing homes in southern California identified all residents scheduled for MDS assessment, regardless of cognitive status or physical condition (N = 198). For the last half of the sample (n = 69), we conducted a 72-hour retest. One resident declined to be interviewed again, thus the retest sample included 68 residents.

Measures

Resident preferences were measured with the draft-PAT, which consisted of 24 items mapped to seven quality-of-life domains. With a few noted exceptions, all questions used a common root (“While you are here in the nursing home, how important is it to you to…”) and a 4-point response scale (very important, somewhat important, not important, and important, but can’t do—no choice). The response option important, but can’t do—no choice was included based on cognitive interview findings. Specifically, we found that despite acknowledging a preference for a particular activity, many residents would rate the activity not important unless they were given an option that allowed them to express perceived barriers (Housen et al., 2008).

Cognitive ability was evaluated using the Cognitive Performance Scale (CPS) (Morris et al., 1994), a measure generated from five MDS items that correlates with the Mini-Mental State Examination (Folstein, Folstein, & McHugh, 1975). CPS scores are interpreted as: 0 to 1 = little or no cognitive impairment, 2 to 3 = mild to moderate cognitive impairment, and 4 to 6 = severe cognitive impairment (Gruber-Baldini, Zimmerman, Mortimore, & Magaziner, 2000; Morris et al., 1994). We classified respondents with CPS scores greater than 1 as having some cognitive impairment.
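The CPS score bands described above can be expressed as a small classification helper. This is only a sketch for clarity; the function names and return labels are ours, not part of the MDS or CPS specification:

```python
def classify_cps(score: int) -> str:
    """Map a Cognitive Performance Scale score (0 to 6) to the
    interpretive bands described by Morris et al. (1994)."""
    if not 0 <= score <= 6:
        raise ValueError("CPS scores range from 0 to 6")
    if score <= 1:
        return "little or no cognitive impairment"
    if score <= 3:
        return "mild to moderate cognitive impairment"
    return "severe cognitive impairment"


def has_some_impairment(score: int) -> bool:
    """Study classification: a CPS score greater than 1 counts as
    some cognitive impairment."""
    return score > 1
```

Under this scheme, a resident scoring 2 falls in the mild to moderate band and is flagged as having some cognitive impairment.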

Data Collection

Two postdoctoral gerontology fellows (P.H. and G.R.S.) conducted the surveys. A visual aid was used to review the response categories with the resident during the interview. Residents could respond to questions verbally or point to answers on the visual aid. If residents could not respond after two readings of the item, or gave nonsensical answers, items were coded as nonresponsive. If residents were unable to provide answers for the first three items in a domain, the interviewers attempted items in two other domains—to determine whether nonresponse might be related to a particular content area—before terminating the interview.

In addition to recording responses, the interviewers recorded their observations of respondents’ behaviors and reactions to questions. In particular, hesitation, clarification requests, and expressed confusion were noted. Such observations are important as respondents will “satisfice,” or attempt to answer even when the question is unclear (Krosnick, 1991). As a result, nonresponse rates may not fully capture the ease of the response task or item clarity.

Analytic Approach

Data were analyzed using SPSS for Windows version 14.0. For the overall survey, we examined administration time and noncompletion rates. Due to sparse cell frequencies, we used Fisher’s exact test to evaluate the association between cognitive status and survey completion. We considered item nonresponse rates, item response distributions, and test-retest reliability to evaluate item-level performance.
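The Fisher's exact test mentioned above crosses survey completion with cognitive status in a 2 × 2 table. The study ran this in SPSS; the self-contained function below is only a hypothetical illustration of the two-sided calculation, which sums the hypergeometric probabilities of every table with the observed margins that is no more likely than the observed table:

```python
from math import comb


def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p value for the 2x2 table [[a, b], [c, d]].

    Holds the row and column totals fixed and sums the hypergeometric
    probabilities of all tables whose probability does not exceed that
    of the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def prob(x):
        # Probability of the table whose top-left cell equals x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    # Small tolerance guards against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))
```

For example, `fisher_exact_2x2(8, 2, 1, 5)` returns approximately 0.035, the standard two-sided result for that table.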

Because facilities would follow up on any item rated as having positive importance, responses were recoded into dichotomous variables (0 = not important; 1 = very important, somewhat important, and important, but can’t do—no choice) for the reliability evaluation. We used percentage agreement, or concordance, to assess item test-retest reliability. Concordance was calculated by dividing the number of consistent responses across the two occasions by the total number of responses. The interdisciplinary research team also reviewed the interviewers’ observations of resident behaviors and reactions. The draft-PAT was revised after considering all information from the survey and item-level evaluation.
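The recoding and concordance calculation just described reduce to a simple percentage-agreement computation, sketched below (response labels follow the draft-PAT scale; the function names are ours):

```python
def dichotomize(response: str) -> int:
    """0 = not important; 1 = any positive importance, including
    'important, but can't do - no choice'."""
    return 0 if response == "not important" else 1


def concordance(test_responses, retest_responses):
    """Percentage agreement between two testing occasions after
    dichotomizing each response."""
    pairs = list(zip(test_responses, retest_responses))
    agree = sum(dichotomize(t) == dichotomize(r) for t, r in pairs)
    return 100 * agree / len(pairs)
```

For a given item, `test_responses` and `retest_responses` would each hold the retest participants' answers on the two occasions; a resident who shifts from very important to somewhat important still counts as agreement after dichotomizing.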

Results

Sample Characteristics

Fourteen (7%) of the 198 residents identified by the MDS coordinators were not approached because they were rarely or never able to make themselves understood, and 13 (6.6%) residents could not be interviewed because they were discharged, were too ill, or died before the interview could be conducted. Thus, baseline interviews were initiated with 171 residents, 12 (7%) of whom declined participation and 11 (6.4%) of whom were discontinued because they provided nonresponsive answers.

The 148 participants with completed interviews ranged in age from 45 to 101 (median age = 76). Two thirds (n = 98) were non-Hispanic White, and all but 14 (9.8%) had completed high school. Ninety-one percent (n = 135) of respondents were men. The majority were long-stay residents: 21 (14.2%) had a length of stay between 91 days and 1 year, and 74 (50%) had a length of stay greater than 1 year (Table 1).

Demographic Characteristics of the Phase 1 and Phase 2 Samples

Table 1: Demographic Characteristics of the Phase 1 and Phase 2 Samples

The survey was successfully re-administered within 72 hours to 68 participants from the baseline sample. The subsample contained a larger proportion of women compared with the baseline (14.7% versus 8.8%). No significant differences were observed for age, race, education, cognition, or length of stay.

Survey-Level Performance

Interview Length. The average time to introduce the topic, explain the response scale, and complete the 24-item baseline survey was 9 minutes (SD = 4 minutes, range = 4 to 25 minutes). Approximately 70% of the interviews were completed in 10 minutes or less. Interviews truncated for nonresponsive answers also averaged 9 minutes (SD = 3 minutes, range = 4 to 18 minutes).

Cognitive Status of Respondents. Of the 198 residents initially identified by the MDS coordinators, approximately 86% (n = 31) of those with mild to moderate cognitive impairment and 36% (n = 10) of those with severe cognitive impairment successfully completed the survey. (CPS data were not available for 10 of the 198 residents.) Of the 11 respondents who started but were unable to complete the interviews, 1 had no cognitive impairment, 5 had mild to moderate cognitive impairment, and 5 had severe cognitive impairment. These respondents were significantly more likely to have some degree of cognitive impairment (Fisher’s exact test, p < 0.001) than participants who completed the survey.

Use of Response Options Across the Survey. All of the available response options were endorsed by sizable numbers of residents. The very important response option was used most often. Overall, 44% of all responses fell into this category, followed by not important (28%), somewhat important (20%), and important, but can’t do—no choice (5%); 3% were nonresponses. The important, but can’t do—no choice option was endorsed a total of 145 times. It was used at least once to respond to each of the 24 items, and 46.6% (n = 69) of residents chose this answer at least once.

Item-Level Performance

Item-Level Nonresponse Rates. For the 148 residents who completed the interview, item-level nonresponse rates across all 24 items averaged 3% (range = 0% to 8%) (Table 2). The average nonresponse rates for the 7 items related to Autonomy (5%, range = 3% to 7%) were somewhat higher than those for the 8 items mapped to Meaningful Activities (2%, range = 0% to 4%).

Phase 1 Draft-Preference Assessment Tool Item-Level Response Rates (N = 148) and Test-Retest Agreement (N = 68)

Table 2: Phase 1 Draft-Preference Assessment Tool Item-Level Response Rates (N = 148) and Test-Retest Agreement (N = 68)

Use of Response Options at the Item Level. The least response variation was observed for the item, “How important is it to you to have a place to lock your things to keep them safe?” More than 4 in 5 residents (82.4%) rated this item very important. Approximately three fourths of residents rated two items (“How important is it to you to be offered an alcoholic beverage on occasion?” [73%], and “How important is it to you to use tobacco products?” [75%]) as not important.

Important, but can’t do—no choice was most frequently selected for items related to Meaningful Activities, accounting for half of the overall use of the response choice (73 of 145, 50.3%). This choice was provided most frequently in response to the following items:

  • “How important is it to you to do your favorite activities or hobbies?” (18 residents).
  • “How important is it to you to do things away from the nursing home?” (14 residents).
  • “How important is it to you to do things with groups of people?” (11 residents).
  • “How important is it to you to have books, newspapers, and magazines to read?” (10 residents).

Interviewer Observations. During Phase 1 testing, interviewers observed increased difficulty (i.e., hesitancy, requests for clarification, expressions of uncertainty) surrounding the word hobbies in a Meaningful Activities item, as well as for items about spending time on appearances and following cultural or family customs. This difficulty was noted even among residents who ultimately selected a response. In addition, residents frequently expressed confusion about the intent or meaning of a question asking about the importance of having a private space for visits. Items that asked about the importance of choosing times for bathing or awakening were also frequently met with confusion, with residents who answered not important commenting that facility schedules were practical or “necessary for my care.”

Test-Retest Reliability. Test-retest concordance (percentage agreement) rates (Table 2) averaged 84% across all 24 items. Concordance ranged from 74% (“How important is it to you to listen to music you like in your room?”) to 96% (“How important is it to you to go outside when the weather is good?”), with only 6 items at less than 80% agreement.

Phase 2
Method

Phase 2 methods were similar to those for Phase 1, thus the section below emphasizes changes from Phase 1 to Phase 2.

Participants

As in Phase 1, nursing home residents scheduled for MDS assessment who were able to make themselves understood at least some of the time were eligible for participation. During Phase 2, 56 residents were scheduled for MDS assessments. Interviews were initiated with 54 of these residents (2 refused) and completed with 50. For residents who were in both the Phase 1 and Phase 2 samples (n = 23), at least 4 months had elapsed between the two interviews. The refined draft-PAT was re-administered to residents available for re-interview within 72 hours of initial testing (n = 42).

Measures

The draft-PAT was modified based on results from Phase 1. We eliminated items that were unclear, increased respondent burden, had high nonresponse rates, or were highly correlated with another item in the same domain.

Three items from Phase 1 were dropped because of relatively high nonresponse rates and interviewer feedback that respondents expressed confusion: “choose the time you are awakened” (7%), “choose bath/shower time” (6%), and “follow cultural or family customs” (8%). Two pairs of items within the same domains (“choose what clothes to wear” and “spend time on appearance”; “use the phone in private” and “have a private space for visits”) were correlated at r ≥ 0.5. Within each pair, the item for which residents demonstrated the greatest hesitancy in answering (i.e., “spend time on appearance” and “have a private space for visits”) was eliminated. A final item, “use tobacco products,” was moved to another MDS section.

For 3 items that interviewers observed were comparatively difficult for residents to answer (“do favorite activities or hobbies,” “stay up past 8 p.m.,” and “be offered an alcoholic beverage”), two alternative versions were developed and cognitively tested (Table 3, Modified Items section). Thus the Phase 2 draft-PAT included 21 items: 15 items from the Phase 1 draft-PAT, plus two alternate versions of each of the three problematic items.

Phase 2 Draft-Preference Assessment Tool Item-Level Response Rates (N = 50) and Test-Retest Agreement (N = 42)

Table 3: Phase 2 Draft-Preference Assessment Tool Item-Level Response Rates (N = 50) and Test-Retest Agreement (N = 42)

In addition, the Phase 2 pilot used a modified response scale. Due to results from the cognitive interviews indicating inconsistent interpretation of the somewhat important option in the 4-point scale used in Phase 1 (Housen et al., 2008), a 5-point importance scale (very important, somewhat important, not very important, not important at all, and important, but can’t do—no choice) was adopted.

Data Collection

Phase 2 interviews were conducted by a social worker, a health services researcher, and an experienced research assistant. Administration protocols were similar to those used in Phase 1.

Analytic Approach

The goal of this phase was to test the new response scale and determine whether performance of the three modified items improved. Analyses were similar to those conducted in Phase 1.

Results

Sample Characteristics

The 50 residents scheduled for MDS assessment had a median age of 78 (age range = 39 to 92). The majority (n = 45, 90%) were men, non-Hispanic White (n = 35, 70%), and had completed high school (n = 49, 98%) (Table 1).

Survey-Level Performance

Interview Length. It took an average of 7 minutes (SD = 5 minutes, range = 3 to 35 minutes) to explain and administer the survey. Approximately 90% of the interviews were completed in 10 minutes or less.

Survey Noncompletion Rates. Of the 54 initiated interviews, only 4 were not completed. As with Phase 1, those residents unable to complete the interview were more likely to have cognitive impairment (Fisher’s exact test, p = 0.002): 3 had severe cognitive impairment, and 1 had mild cognitive impairment.

Item-Level Performance

Item-Level Nonresponse Rates. Table 3 shows nonresponse rates for the 50 participants who completed the baseline interview. Item-level nonresponse rates for items advanced to the national field trial were low, ranging from 0% to 4%.

Use of Response Options. Residents endorsed the very important response option most often. Overall, 39% of all responses fell into this category, followed by somewhat important (24%), not very important (19%), not important at all (11%), and important, but can’t do—no choice (4%); 3% were nonresponses. Half of the respondents (n = 25) endorsed the important, but can’t do—no choice response option at least once.

Interviewer Observations. Interviewer observations for the new/revised items and response scale were mixed. We tested two alternative versions of the activity item (“do favorite activities” and “do any activities”). Interviewers noted no difficulty for residents when the item addressed a “favorite” activity. However, when the item asked about “any” activities, several residents expressed confusion or asked a clarifying question. Interviewers noted continued confusion with both modified versions of the “stay up past 8 p.m.” item and that neither modified alcohol item diminished residents’ concerns about endorsing alcohol use. The interviewers also noted the revised 5-point scale appeared to ease the response task for participants.

Test-Retest Reliability. Test-retest concordance (percentage agreement) across testing occasions over 24 to 72 hours was slightly higher for the 15 unchanged items (87%) using the 5-point scale than during Phase 1 testing using the 4-point scale (84%) (Table 3). Concordance ranged from 76% (“do things with groups of people”) to 95% (“go outside when the weather is good”) for the 15 unchanged items, with only 2 items at less than 80% agreement (“do things with groups of people” and “participate in religious services”), compared with 6 items with less than 80% agreement in Phase 1. Test-retest performance declined slightly for the modified items asking about resident preferences for activities (82% concordance for original item versus 80% and 73% for the alternate versions), bedtime (85% for original versus 68% and 78% for alternates), and alcohol (80% for original versus 78% and 83% for alternates).

Discussion

We aimed to develop a preference assessment tool that might help nursing home staff systematically appraise resident preferences during MDS assessment. The iterative development process used in this study demonstrates that although residents can respond to preference items using an importance scale, some items may be more difficult to answer than others. Overall, the draft-PAT performed well in both phases of pilot testing. Most residents were amenable to being surveyed. In addition, item-level nonresponse rates were low, and the amount of time required to complete the draft-PAT was acceptable, averaging 7 minutes for the second version. Results also demonstrated that the draft-PAT is feasible for most residents with no, mild, or moderate cognitive impairment.

Although noncompletion rates were associated with cognitive status, the vast majority of residents completed the interview, including some residents with severe cognitive impairment. In addition, item-level concordance rates (percentage agreement) were high, suggesting that residents are able to report reliably about their routine preferences over a period of several days. For nurses who want to adopt published surveys, the evolution of the PAT demonstrates the importance of careful iterative testing to ensure item wording and responses maximize survey performance in their target population.

The team ultimately recommended 18 items advance for testing as part of a large national CMS and VA field trial of proposed MDS revisions. In response to recommendations from consumer advocates, the subset included a revised item assessing preferences for alcohol use (“If your doctor approves, would you like to be offered alcohol on occasion at meals or social events?” Response choices: no, yes, and yes, but can’t do—no choice). We selected the original version of the bedtime item, which performed better than the alternate versions tested, and chose the simpler version of the item about favorite activities tested in Phase 2 (“do favorite activities”).

Given its acceptable performance during Phase 2 pilot testing, the 5-point importance scale was also advanced to the national field trial. The response option important, but can’t do—no choice allows nursing staff to identify areas where residents feel choices are constrained. The introduction of a metric with more choices did not decrease completion rates or adversely affect administration time. Further, interviewers observed that residents had greater ease using the revised scale, indicating it may have more closely matched residents’ experiences and therefore decreased the effort required to select a response.

Low item-level nonresponse rates, as well as participants’ use of all five response options, suggested nursing home residents understood and were able to use the scale to answer survey items. In addition, while the important, but can’t do—no choice response was endorsed less frequently than the other options, it is notable that approximately half of the residents in the Phase 1 (46.6%) and Phase 2 (50%) samples chose this answer at least once.

Limitations and Suggestions for Future Research

Participants were recruited from a single geographical area, and most were men. It is important to demonstrate that selected items work well in other regions with both genders. Next steps will include the national trial of the draft-PAT in a sample of male and female nursing home residents living in geographically diverse VA and non-VA facilities.

Since the ultimate goal of any preference survey is to improve quality of life, it is also important that future studies test whether the interview initiates greater interaction between caregivers and residents, empowers residents to state preferences, and facilitates individualized nursing care planning. Further, while many residents with cognitive impairment will be able to participate in the PAT, other approaches and information sources will need to be used with those who are unable to communicate or to complete the interview.

Conclusion and Implications

The current MDS 2.0 Customary Routine section encourages but does not require direct resident interview, nor does it provide a tested and refined set of interview items to facilitate staff elicitation of preferences. Development of the draft-PAT as part of the national MDS revision underscores a growing emphasis on including the residents’ voice in the assessment process. Survey items included in the draft-PAT were chosen for their relevance to the resident experience, and the 5-point response set that was ultimately selected provides information about physical and environmental barriers as residents perceive them. Information gained from surveying residents using the PAT could be applied to designing interventions at the individual level or for planning routine care services and activity schedules around preferences that are highly rated at the aggregate level (Whitlatch, 2006). In either case, the potential exists to improve resident satisfaction and nursing home quality of life.

References

  • Brod, M., Stewart, A.L., Sands, L. & Walton, P. (1999). Conceptualization and measurement of quality of life in dementia: The Dementia Quality of Life instrument (DQoL). The Gerontologist, 39, 25–35.
  • Carpenter, B.D., Van Haitsma, K., Ruckdeschel, K. & Lawton, M.P. (2000). The psychosocial preferences of older adults: A pilot examination of content and structure. The Gerontologist, 40, 335–348.
  • Feinberg, L.F. & Whitlatch, C.J. (2001). Are persons with cognitive impairment able to state consistent choices? The Gerontologist, 41, 374–382.
  • Folstein, M.F., Folstein, S.E. & McHugh, P.R. (1975). “Mini-mental state.” A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189–198. doi:10.1016/0022-3956(75)90026-6 [CrossRef]
  • Forsyth, B.H. & Lessler, J.T. (1991). Cognitive laboratory methods: A taxonomy. In Biemer, P.P., Groves, R.M., Lyberg, L.E., Mathiowetz, N.A. & Sudman, S. (Eds.), Measurement errors in surveys (pp. 393–418). New York: Wiley and Sons.
  • Gruber-Baldini, A.L., Zimmerman, S.I., Mortimore, E. & Magaziner, J. (2000). The validity of the minimum data set in measuring the cognitive impairment of persons admitted to nursing homes. Journal of the American Geriatrics Society, 48, 1601–1606.
  • Housen, P., Shannon, G. & Saliba, D. (2005, November). Nursing home resident preferences for daily activities: The ombudsman perspective. Paper presented at the 58th annual meeting of the Gerontological Society of America, Orlando, FL.
  • Housen, P., Shannon, G.R., Simon, B., Edelen, M.O., Cadogan, M.P. & Sohn, L. et al. (2008). What the resident meant to say: Use of cognitive interviewing techniques to develop questionnaires for nursing home residents. The Gerontologist, 48, 158–169.
  • Kane, R.A. & Degenholtz, H. (1997, Spring). Assessing values and preferences: Should we, can we? Generations, pp. 19–24.
  • Kane, R.A., Degenholtz, H.B. & Kane, R.L. (1999). Adding values: An experiment in systematic attention to values and preferences of community long-term care clients. Journals of Gerontology. Series B, Psychological Sciences and Social Sciences, 54, S109–S119.
  • Kane, R.L. & Kane, R.A. (2001). What older people want from long-term care, and how they can get it. Health Affairs, 20. Retrieved from http://content.healthaffairs.org/cgi/reprint/20/6/114.pdf doi:10.1377/hlthaff.20.6.114 [CrossRef]
  • Kane, R.A., Kling, K.C., Bershadsky, B., Kane, R.L., Giles, K. & Degenholtz, H.B. et al. (2003). Quality of life measures for nursing home residents. Journals of Gerontology. Series A, Biological Sciences and Medical Sciences, 58, 240–248.
  • Krosnick, J.A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5, 213–236. doi:10.1002/acp.2350050305 [CrossRef]
  • Lawton, M.P. (2001). Quality of care and quality of life in dementia care units. In Noelker, L.S. & Harel, Z. (Eds.), Linking quality of long-term care and quality of life (pp. 136–161). New York: Springer.
  • Lum, T.Y., Kane, R.A., Cutler, L.J. & Yu, T.C. (2008). Effects of Green House nursing homes on residents’ families. Health Care Financing Review, 30(2), 35–51.
  • McVeigh, S.E., Jablonski, R.A. & Penrod, J. (2009). Strategies for improving family satisfaction with long-term care facilities: Direct care and family-staff interactions. Annals of Long-Term Care, 17(4), 25–28.
  • Morris, J.N., Fries, B.E., Mehr, D.R., Hawes, C., Phillips, C. & Mor, V. et al. (1994). MDS cognitive performance scale. Journal of Gerontology, 49, M174–M182.
  • National Citizen’s Coalition for Nursing Home Reform. (1985). A consumer perspective on quality care: The residents’ point of view. Washington, DC: Author.
  • Noelker, L.S., Ejaz, F.K. & Schur, D. (2000). Prevalence and problems in the use of satisfaction surveys: Results from research on Ohio nursing homes. In Cohen-Mansfield, J., Ejaz, F.K. & Werner, P. (Eds.), Satisfaction surveys in long-term care (pp. 101–121). New York: Springer.
  • Polisher Research Institute. (n.d.). Preferences for Everyday Living Questionnaire. Retrieved from http://www.abramsoncenter.org/PRI/documents/PELIQuestionnaire.pdf
  • Presser, S., Couper, M.P., Lessler, J.T., Martin, E., Martin, J. & Rothgeb, J.M. et al. (2004). Methods for testing and evaluating survey questions. Public Opinion Quarterly, 68, 109–130. doi:10.1093/poq/nfh008 [CrossRef]
  • Rantz, M.J., Zwygart-Stauffacher, M., Popejoy, L., Grando, V.T., Mehr, D.R. & Hicks, L.L. et al. (1999). Nursing home care quality: A multidimensional theoretical model integrating the views of consumers and providers. Journal of Nursing Care Quality, 14, 16–37.
  • Saliba, D. & Schnelle, J.F. (2002). Indicators of the quality of nursing home residential care. Journal of the American Geriatrics Society, 50, 1421–1430. doi:10.1046/j.1532-5415.2002.50366.x [CrossRef]
  • Uman, G.C., Hocevar, D., Urman, H.N., Young, R., Hirsch, M. & Kohler, S. (2000). Satisfaction surveys with the cognitively impaired. In Cohen-Mansfield, J., Ejaz, F.K. & Werner, P. (Eds.), Satisfaction surveys in long-term care (pp. 166–168). New York: Springer.
  • Van Haitsma, K. (2000). The assessment and integration of preferences into care practices for persons with dementia residing in the nursing home. In Rubenstein, R.L., Moss, M. & Kleban, M. (Eds.), The many dimensions of aging (pp. 143–163). New York: Springer.
  • Whitlatch, C.J. (2006, November). Discussion: Innovations in assessing psychosocial preferences and delivering individualized care. In Van Haitsma, K. & Whitlatch, C.J. (Chairs), Individualizing care to frail elders: Conceptual and measurement issues in assessing psychosocial preferences. Symposium conducted at the meeting of the Gerontological Society of America, Dallas, TX.
  • Whitlatch, C.J., Feinberg, L.F. & Tucke, S.S. (2005). Measuring the values and preferences for everyday care of persons with cognitive impairment and their family caregivers. The Gerontologist, 45, 370–380.

Demographic Characteristics of the Phase 1 and Phase 2 Samples

| Characteristic | Phase 1 Baseline (n = 148) | Phase 1 Retest (n = 68) | Phase 2 Baseline (n = 50) | Phase 2 Retest (n = 42) |
|---|---|---|---|---|
| Age | | | | |
|   Younger than 65 | 34 (23) | 14 (20.6) | 14 (28) | 10 (23.8) |
|   65 to 74 | 35 (23.6) | 17 (25) | 5 (10) | 4 (9.5) |
|   75 to 84 | 56 (37.8) | 23 (33.8) | 24 (48) | 22 (52.4) |
|   85 and older | 23 (15.5) | 14 (20.6) | 7 (14) | 6 (14.3) |
| Gender | | | | |
|   Men | 135 (91.2) | 58 (85.3) | 45 (90) | 37 (88.1) |
|   Women | 13 (8.8) | 10 (14.7) | 5 (10) | 5 (11.9) |
| Race | | | | |
|   Non-Hispanic White | 98 (66.2) | 45 (66.2) | 35 (70) | 31 (73.8) |
|   Non-Hispanic Black | 36 (24.3) | 16 (23.5) | 8 (16) | 6 (14.3) |
|   Hispanic | 11 (7.4) | 6 (8.8) | 4 (8) | 2 (4.8) |
|   Asian | 3 (2) | 1 (1.5) | 3 (6) | 3 (7.1) |
| Length of stay | | | | |
|   90 or fewer days | 53 (35.8) | 21 (30.9) | 17 (34) | 14 (33.3) |
|   91 days to 1 year | 21 (14.2) | 9 (13.2) | 10 (20) | 8 (19) |
|   366 days to 3 years | 42 (28.4) | 20 (29.4) | 9 (18) | 8 (19) |
|   More than 3 years | 32 (21.6) | 18 (26.5) | 14 (28) | 12 (28.6) |
| Educationᵃ | | | | |
|   Less than high school | 14 (9.8) | 4 (6.1) | 1 (2) | 1 (2.4) |
|   High school graduate | 58 (40.8) | 32 (49.2) | 18 (36) | 17 (40.5) |
|   Trade school/some college | 50 (35.2) | 20 (30.8) | 23 (46) | 17 (40.5) |
|   Bachelor’s degree | 14 (9.9) | 5 (7.7) | 5 (10) | 5 (11.9) |
|   Graduate degree | 6 (4.2) | 4 (6.2) | 3 (6) | 2 (4.8) |
| Cognitive Performance Scaleᵇ | | | | |
|   No cognitive impairment (score 0 to 1) | 105 (71.9) | 50 (74.6) | 38 (76) | 32 (76.2) |
|   Mild to moderate cognitive impairment (score 2 to 3) | 31 (21.2) | 14 (20.9) | 9 (18) | 9 (21.4) |
|   Severe cognitive impairment (score 4 to 6) | 10 (6.9) | 3 (4.5) | 3 (6) | 1 (2.4) |

Note. Values are n (%).

Phase 1 Draft-Preference Assessment Tool Item-Level Response Rates (N = 148) and Test-Retest Agreement (N = 68)

| Abbreviated Item Content by Quality of Life Domain | Very Important (%) | Somewhat Important (%) | Not Important (%) | Important, But Can’t Do—No Choice (%) | Nonresponse (%) | Test-Retest Agreement (%) |
|---|---|---|---|---|---|---|
| MEANINGFUL ACTIVITIES | | | | | | |
| Have books, newspapers, and magazines to read | 48.6 | 16.2 | 26.4 | 6.8 | 2.1 | 77 |
| Listen to music in your room | 41.2 | 28.4 | 22.3 | 4.1 | 4.1 | 74 |
| Be around animals | 19.6 | 18.2 | 56.1 | 4.7 | 1.4 | 78 |
| Keep up with the news | 53.4 | 32.4 | 11.5 | 1.4 | 1.4 | 93 |
| Do things with groups of people | 20.9 | 27.7 | 42.6 | 7.4 | 1.4 | 79 |
| Do favorite activities or hobbies | 42.6 | 18.9 | 23.6 | 12.2 | 2.7 | 82 |
| Do things away from the nursing home | 39.2 | 24.3 | 24.3 | 9.5 | 2.7 | 83 |
| Go outside when the weather is good | 69.6 | 17.6 | 9.5 | 3.4 | 0 | 96 |
| AUTONOMY | | | | | | |
| Stay up past 8 p.m.ᵃ | 45.9 | 21.6 | 25.7 | 2 | 4.8 | 85 |
| Choose the time you are awakenedᵃ | 45.3 | 14.9 | 28.4 | 4.7 | 6.8 | 80 |
| Choose bath/shower type | 62.2 | 15.5 | 14.2 | 4.7 | 3.4 | 82 |
| Choose bath/shower time | 42.6 | 20.3 | 26.4 | 4.7 | 6.1 | 83 |
| Choose what clothes to wear | 44.6 | 27 | 20.9 | 2 | 5.4 | 86 |
| Spend time on appearance | 49.3 | 23 | 20.3 | 4.1 | 3.4 | 84 |
| Use tobacco products | 12.8 | 8.8 | 75 | 0.7 | 2.7 | 91 |
| FUNCTIONAL COMPETENCE | | | | | | |
| Take care of your personal belongings | 70.3 | 16.9 | 5.4 | 5.4 | 2 | 91 |
| FOOD ENJOYMENT | | | | | | |
| Have snacks available | 33.8 | 25 | 36.5 | 2 | 2.7 | 79 |
| Be offered an alcoholic beverage | 6.1 | 16.9 | 73 | 2 | 2 | 80 |
| SPIRITUAL WELL-BEING | | | | | | |
| Participate in religious services | 49.3 | 24.3 | 23 | 2 | 1.4 | 83 |
| Follow cultural or family customs | 33.8 | 20.9 | 33.8 | 3.4 | 8.1 | 76 |
| PRIVACY | | | | | | |
| Use the phone in private | 51.4 | 16.9 | 24.3 | 4.7 | 2.7 | 88 |
| Have a private space for visits | 40.5 | 27.7 | 27.7 | 0.7 | 3.4 | 84 |
| SECURITY | | | | | | |
| Have family in care discussions | 47.3 | 15.5 | 29.1 | 4.1 | 4.1 | 85 |
| Have a place to lock your things | 82.4 | 7.4 | 8.1 | 1.4 | 0.7 | 91 |

Phase 2 Draft-Preference Assessment Tool Item-Level Response Rates (N = 50) and Test-Retest Agreement (N = 42)

| Abbreviated Item Content | Very Important (%) | Somewhat Important (%) | Not Very Important (%) | Not Important at All (%) | Important, But Can’t Do—No Choice (%) | Nonresponse (%) | Test-Retest Agreement (%) |
|---|---|---|---|---|---|---|---|
| UNCHANGED ITEMS (ABBREVIATED) | | | | | | | |
| Have books, newspapers, and magazines to read | 32 | 26 | 18 | 12 | 10 | 2 | 80 |
| Listen to music in your room | 34 | 46 | 10 | 8 | 0 | 2 | 88 |
| Be around animals | 14 | 18 | 20 | 42 | 2 | 4 | 83 |
| Keep up with the news | 54 | 36 | 2 | 6 | 0 | 2 | 93 |
| Do things with groups of people | 16 | 30 | 18 | 30 | 4 | 2 | 76 |
| Do things away from the nursing home | 36 | 22 | 12 | 16 | 10 | 4 | 83 |
| Go outside when the weather is good | 64 | 24 | 6 | 4 | 2 | 0 | 95 |
| Choose bath/shower type | 54 | 16 | 6 | 12 | 10 | 2 | 90 |
| Choose what clothes to wear | 38 | 24 | 20 | 14 | 2 | 2 | 83 |
| Take care of your belongings | 68 | 18 | 4 | 4 | 2 | 4 | 93 |
| Have snacks available | 48 | 12 | 12 | 26 | 2 | 0 | 91 |
| Participate in religious services | 38 | 20 | 14 | 16 | 8 | 4 | 78 |
| Use the phone in private | 42 | 14 | 14 | 20 | 6 | 4 | 83 |
| Have family in care discussions | 46 | 22 | 8 | 14 | 6 | 4 | 90 |
| Have a place to lock your things | 68 | 12 | 2 | 14 | 2 | 2 | 93 |
| MODIFIED ITEMS | | | | | | | |
| Do favorite activities | 34 | 28 | 8 | 16 | 12 | 2 | 80 |
| Do any activities | 26 | 44 | 6 | 16 | 4 | 4 | 73 |
| Stay up past 8 p.m. | 32 | 34 | 8 | 22 | 2 | 2 | 68 |
| Have an alcoholic beverage | 2 | 6 | 12 | 72 | 4 | 4 | 78 |

MODIFIED ITEMS: ALTERNATE STEMS (“Before coming to this nursing home, did you…”)

| Abbreviated Item Content | Yes (%) | No (%) | Nonresponse (%) | Test-Retest Agreement (%) |
|---|---|---|---|---|
| Stay up past 8 p.m. | 80 | 14 | 6 | 78 |
| Have alcohol on occasion | 34 | 64 | 2 | 83 |

Preference Assessment Tool

Housen, P., Shannon, G.R., Simon, B., Edelen, M.O., Cadogan, M.P. & Jones, M. et al. (2009). Why Not Just Ask the Resident? Refinement of a Preference Assessment Tool for Nursing Homes. Journal of Gerontological Nursing, 35(11), 40–49.

  1. A brief new survey is intended to help nursing staff systematically interview nursing home residents about their preferences for daily routines and activities.

  2. Interview items should be carefully tested before implementation. Our research shows how development and pilot testing allowed us to identify needed changes in items and response choices.

  3. Development of the new survey included cognitive interviews to ensure questions were relevant to residents’ experience in the nursing home and were easily understood and answered.

  4. Most residents with no, mild, or moderate cognitive impairment, and some residents with severe cognitive impairment, were able to successfully complete the new survey.

Authors

Dr. Housen and Dr. Shannon were Postdoctoral Fellows at the time the research was conducted, and Ms. Simon is Health Research Scientist/Survey Director, Veterans Administration Greater Los Angeles Healthcare System Geriatric Research Education and Clinical Center (VA GLAHS GRECC) and Veterans Affairs Health Services Research & Development (VA HSR&D) Center of Excellence for the Study of Health Care Provider Behavior, Sepulveda; Dr. Edelen is Behavioral Scientist and Psychometrician, RAND Corporation, Santa Monica; Dr. Cadogan is Adjunct Professor, School of Nursing, University of California, Los Angeles (UCLA); Ms. Jones is Project Administrator, RAND Corporation, Los Angeles; and Dr. Saliba is Anna and Harry Borun Chair in Geriatrics and Gerontology, UCLA; Research Physician, VA GLAHS GRECC and VA HSR&D Center of Excellence for the Study of Health Care Provider Behavior; Director, UCLA/Los Angeles Jewish Homes Borun Center for Gerontological Research; and Senior Natural Scientist, RAND Health, Los Angeles, California. Dr. Buchanan is Retired Lecturer in Health Care Policy, Department of Health Care Policy, Harvard Medical School, Boston, Massachusetts.

The authors disclose that they have no significant financial interests in any product or class of products discussed directly or indirectly in this activity. This work was funded by the U.S. Department of Veterans Affairs (VA), Veterans Health Administration, VA HSR&D Service through the VA Greater Los Angeles HSR&D Center of Excellence (Project SDR 03-217) and the VA Office of Academic Affairs (TPP 65-002 and TPP 65-003). The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the U.S. Department of Veterans Affairs.

Address correspondence to Patricia Housen, PhD, Research Associate, Partners in Care Foundation, 732 Mott Street, Suite 150, San Fernando, CA 91340; e-mail: patriciahousen@gmail.com.

10.3928/00989134-20091001-01
