Journal of Gerontological Nursing

Research Brief 

Advantages and Disadvantages of Using MDS Data in Nursing Research

Juh Hyun Shin, PhD, RN; Yvonne Scherer, EdD, RN

Abstract

The purpose of this article is to review the advantages and disadvantages of using Minimum Data Set (MDS) data for nursing research, the psychometric characteristics of the MDS 2.0, and threats to the validity of its psychometric characteristics. The defined major advantages of the MDS are: (a) it provides continuous evaluation of residents’ health and functional status, and (b) it enables facility evaluation at the nursing home level. The reviewed articles from the literature report that MDS 2.0 has moderate to moderate/high validity and reliability; however, the psychometric properties of MDS 2.0 are still controversial, mainly because the instrument has failed to identify depression in older nursing home residents.

Dr. Shin is Research Assistant Professor, and Dr. Scherer is Associate Professor, School of Nursing, State University of New York at Buffalo, Buffalo, New York.

Address correspondence to Juh Hyun Shin, PhD, RN, Research Assistant Professor, School of Nursing, State University of New York at Buffalo, 823 Kimball Tower, 3435 Main Street, Buffalo, NY 14214–3079; e-mail: iamjoohyun@gmail.com.

The Minimum Data Set (MDS) was developed to offer a comprehensive assessment of nursing home (NH) residents because of concerns expressed in the Omnibus Budget Reconciliation Act of 1987 (OBRA ’87) about the quality of care in NHs (Hawes et al., 1995; Mukamel & Spector, 2003). The Nursing Home Reform Act was passed as a part of OBRA ’87 to improve quality of care through regulation and inspections (Mukamel & Spector, 2003). OBRA ’87 required development of standardized assessment of NH residents (Institute of Medicine [IOM], 2001). Consequently, in 1998, the Resident Assessment Instrument (RAI) was implemented nationally through the Centers for Medicare & Medicaid Services’ (CMS) Health Care Quality Improvement Program for NHs (Mukamel & Spector, 2003). The RAI has three components: (a) the MDS, which is used as a preliminary screen to recognize potential resident problems and strengths, (b) resident assessment protocols, which are organized, problem-oriented frameworks for MDS information and investigation of additional clinical information about residents, and (c) usage guidelines (CMS, 2002/2008). The purpose of this article is to review the advantages and disadvantages of using MDS data for nursing research, the psychometric characteristics of the MDS 2.0, and threats to the validity of its psychometric characteristics.

The MDS is a 284-item instrument devised to evaluate the medical, mental, and social characteristics of NH residents (Lawton et al., 1998). The MDS was constructed, tested, and modified through consultation with professionals, including researchers and government regulators (IOM, 2001). By the end of 1990, the MDS was implemented in all U.S. NHs certified by CMS (IOM, 2001). The MDS measures residents’ activities of daily living (ADLs), as well as changes in these activities (IOM, 2001). The MDS is divided into 15 sections: cognitive patterns, communication and hearing patterns, vision patterns, physical functioning and structural problems, continence, psychosocial well-being, mood and behavior patterns, activity-pursuit patterns, disease diagnoses, health conditions, nutritional status, oral and dental status, skin condition, medication use, and special treatments and procedures (CMS, 2002/2008). OBRA ’87 provisions mandated development of the MDS and the routine use of the electronic MDS for all NH residents and required that quality assurance and assessment processes be used in all NHs to improve quality of care (Rantz et al., 2000). The major advantage of the MDS is that it is a rich source of research data on NH residents, especially for measuring quality in long-term care settings. Version 3.0 of the MDS, an update of version 2.0, was proposed to CMS for validation in April 2003 (Anderson, Connolly, Pratt, & Shapiro, 2003). Version 2.0 is currently used across the United States; the updated version is still under development and is scheduled for implementation in October 2009 (Anderson et al., 2003).

Advantages and Disadvantages of Using MDS Data

The major advantage of the MDS is that it requires NH staff to thoroughly assess residents’ health and functional status on a regular and continuous basis (Hendrix, Sakauye, Karabatsos, & Daigle, 2003; Won, Morris, Nonemaker, & Lipsitz, 1999). NHs are required to complete an MDS assessment on admission, every 3 months thereafter, and whenever a resident experiences a significant change in status (Lum, Lin, & Kane, 2005; Zimmerman, 2003). The MDS is used in preliminary screening to identify potential problems and strengths of residents (CMS, 2002/2008). It screens functional status elements and comprehensively assesses residents. In addition, because MDS data are collected regularly, quality of care can be monitored in NHs (Rantz et al., 1996). After implementation of the MDS, several researchers reported benefits, including restraint reduction (Hawes et al., 1997; Lum et al., 2005; Marek, Rantz, Fagin, & Krejci, 1996; Migdail, 1992), decreased dehydration (Blaum, O’Neill, Clements, Fries, & Fiatarone, 1997), and increased physical and cognitive function (Morris et al., 1997). However, further research to evaluate the impact of the MDS is necessary.

The MDS also makes it possible to compare a particular facility with its peer group of NHs (Zimmerman, 2005). Resident-level MDS data are meaningful because the tool evaluates risk-adjusted health outcomes across facilities (Mukamel & Spector, 2003). All MDS records are conveyed by the CMS through state public health agencies to a national warehouse and are used to aid the existing survey and certification processes that monitor NH quality (Mor et al., 2003). Since June 1998, all NHs certified by CMS have been required to submit MDS information electronically to the Health Care Financing Administration (renamed CMS in July 2001) on a quarterly basis (IOM, 2001). Thus, a longitudinal record of the clinical and psychosocial profiles of residents is now possible (Karon, Sainfort, & Zimmerman, 1999).

One of the MDS’s disadvantages is inaccuracy. Abt Associates, one of the largest for-profit government research organizations, reported that 11.65% of the 284 MDS items have high error rates (Chomiak et al., 2001; Reilly, 2001). Accuracy was determined by comparing MDS data entered by trained RNs with a medical record review (Chomiak et al., 2001). Abt Associates concluded that cognitive patterns, psychosocial well-being, physical functioning, skin condition, and activity-pursuit patterns were reported with the least accuracy (Chomiak et al., 2001). Underreported MDS items included vision, health conditions, pain, and falls; overreported items included intravenous medication, intake and output, and physical, occupational, and speech therapies (Chomiak et al., 2001). The U.S. Department of Health and Human Services, Office of Inspector General (2001) compared the MDS with medical record reviews and, consistent with Chomiak et al.’s (2001) findings, found errors in 17% of 640 residents’ MDS records. Furthermore, according to the U.S. General Accounting Office (2002), 9 of 11 states with MDS accuracy-review programs discovered common errors in some MDS categories. Such errors have been noted with the following items: mood and behavior, nursing rehabilitation and restorative care, ADLs, therapy, physician visits or orders, toileting plans, and skin conditions.

Another issue is the level of technology in NHs. According to a presentation by Harvell (2004), the Office of the National Coordinator on Health Information Technology, U.S. Department of Health and Human Services, found that the information technology underlying the current MDS design and content dates from the early 1990s. Because MDS information technology has not been well developed, use of electronic sources retrieved from MDS 2.0 has been limited (Harvell, 2004). It has been difficult to retrieve the specific items that researchers want to use for research purposes.

Evaluation of the Psychometric Properties of MDS 2.0

Considering the role of the MDS (including placement, care planning, and reimbursement), its reliability and validity have become important issues in research (Lawton et al., 1998). Because all NHs certified by Medicare and Medicaid use the MDS, reliability and validity should be prerequisites for its use in practice. However, few studies have examined the validity and reliability of MDS 2.0 (Gruber-Baldini, Zimmerman, Mortimore, & Magaziner, 2000). Psychometric studies of MDS 2.0 regarding cognition, depression, perineal dermatitis, pain, urinary tract infection, nutrition, behavior, and ADLs are summarized in the Table.

Table 1: Psychometric Properties of Minimum Data Set (MDS) 2.0 Items

Validity

The validity of the MDS has been studied in the areas of cognitive function, depression, perineal dermatitis, pain, urinary tract infection, nutrition/weight loss, behavior, and ADLs. Criterion validity is measured by comparing one instrument with an external criterion; construct validity determines that the instrument actually measures what researchers want to study (Polit & Beck, 2006). The majority of researchers have compared the MDS with validated research instruments to test its criterion validity, and results have varied widely. Some researchers tested construct validity by comparing two groups of NHs that had high and low prevalence on measured concepts.

Cognitive Function. In most studies, the cognitive performance scale of the MDS has been found to have high criterion validity (range = 0.41 to 0.92) (Cohen-Mansfield, Taylor, McConnell, & Horton, 1999; Gruber-Baldini et al., 2000; Hartmaier, Sloane, Guess, & Koch, 1994; Morris et al., 1994; Snowden et al., 1999) and moderate to high construct validity (range = 0.45 to 0.70) (Lawton et al., 1998). However, Horgas and Margrett (2001) reported that the cognitive performance scale of the MDS was only valid for a nondepressed sample, which means that this tool may not be valid for individuals with depression. Considering that many NH residents have depression, future studies must confirm the validity of the MDS for such residents.

Depression. The validity of the MDS in measuring depression in NH residents has been questioned in many studies. Depression items have not been found to be significantly related to validated instruments of depression, such as the Cornell Scale for Depression in Dementia (Hendrix et al., 2003), the Geriatric Depression Scale (Koehler et al., 2005), or the Revised Memory and Behavior Problems Checklist for nondepressed samples (Horgas & Margrett, 2001). Its validity was found to be quite low in one study (Anderson, Buckwalter, Buchanan, Maas, & Imhof, 2003), moderate in another (Meeks, 2004), and quite high in a third (Burrows, Morris, Simon, Hirdes, & Phillips, 2000). Further studies are required to confirm the validity of the MDS in measuring depression.

Perineal Dermatitis. The validity of the MDS in measuring perineal dermatitis was supported by comparing risk factors of perineal dermatitis with data retrieved from residents’ charts (Toth, Bliss, Savik, & Wyman, 2008).

Pain. The validity of the MDS regarding pain items is also questionable. Some pain items were found to be significantly related to residents’ self-reports and geriatricians’ assessments of residents with moderate impairment, but no statistically significant relationship was found for residents with severe cognitive impairment (Cohen-Mansfield et al., 1999). Fisher et al. (2002) also reported that the pain items on the MDS were not significantly related to the proxy pain questionnaire developed by the researchers. Two research teams found that the validity of the pain item on the MDS was questionable in that the MDS underestimated the prevalence of pain for residents with cognitive impairment (Cohen-Mansfield et al., 1999; Fisher et al., 2002).

Urinary Tract Infection. MDS urinary tract infection items were not found to be related to those in surveillance data sets (Stevenson, Moore, & Sleeper, 2004). It was reported that the MDS overestimates the number of residents who have urinary tract infections, yet adequately estimates residents without urinary tract infections (Stevenson et al., 2004).

Nutrition/Weight Loss. In a study by Simmons, Lim, and Schnelle (2002), MDS weight loss quality indicators were found to reflect differences between two NH groups (low and high prevalence of weight loss), and the construct validity of these indicators was supported; however, results for their criterion validity varied (Blaum et al., 1997; Simmons et al., 2002). The MDS was shown to underestimate the number of residents with risk factors for undernutrition compared with the interview assessment protocols used by the researchers (Simmons et al., 2002).

Behavior. The behavior items of the MDS have low to moderate criterion validity (range = 0.24 to 0.5) (Lawton et al., 1998; Snowden et al., 1999).

ADLs. ADLs, as measured by the MDS, were shown in two studies to have moderate to high validity (range = 0.5 to 0.98) (Lawton et al., 1998; Snowden et al., 1999). As these were the only two studies available at the time of this investigation, further research is necessary to confirm the validity of ADL items.

Reliability

Studies suggest that, overall, the MDS 2.0 has moderate to high reliability (Casten, Lawton, Parmelee, & Kleban, 1998; Hawes et al., 1995; Morris et al., 1997). The reliability was found to be high for the cognitive performance scale (Gruber-Baldini et al., 2000; Morris et al., 1994), moderate to high for pain items (Cohen-Mansfield et al., 1999; Fisher et al., 2002), and moderate to high for ADL items (Lawton et al., 1998; Morris, Fries, & Morris, 1999; Snowden et al., 1999). However, the reliability of depression items was found to be low (Anderson, Buckwalter, et al., 2003). Further research to test the reliability of the depression items is required.
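The interrater and test-retest coefficients cited above are typically chance-corrected agreement statistics such as Cohen’s kappa. As a minimal sketch of how such a coefficient is computed (the two raters’ item codes below are hypothetical, not actual MDS assessments):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical codes."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of exact agreements
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical ADL self-performance codes (0-4) from two assessors
a = [0, 1, 2, 2, 3, 4, 1, 0, 2, 3]
b = [0, 1, 2, 3, 3, 4, 1, 1, 2, 3]
print(round(cohens_kappa(a, b), 2))  # → 0.75
```

Conventionally, values near 0.4 are read as moderate agreement and values above roughly 0.75 as high agreement, which is the sense in which the coefficient ranges above are labeled.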

Summary

In general, MDS 2.0 has been reported to have moderate to moderate/high validity and reliability. In terms of criterion validity, the cognitive performance scale, perineal dermatitis items, and ADL items were fairly good, while depression and behavior items were generally low. Pain and nutrition/weight loss items are still questionable. Construct validity was supported in the cognitive performance scale and nutrition/weight loss items. However, the construct validity of depression items was low.

Internal consistency was supported for the cognitive performance scale and for the depression and pain items. The stability (test-retest reliability) of the MDS was supported for the cognitive performance scale and pain items; however, a low-to-moderate coefficient was reported for depression items. Interrater reliability (equivalence) of the overall MDS was quite acceptable and was good for ADL items. Alarmingly, there is still a major concern that the MDS has failed to identify depression in elderly NH residents (Anderson, Buckwalter, et al., 2003; Koehler et al., 2005; Meeks, 2004). Considering that many residents have depression, further research is required to revise the MDS 2.0 to improve its psychometric properties.

Threats to Reliability and Validity of MDS Data

Some of the potential threats to the reliability and validity of MDS data are the raters, training for completing the MDS, and those who administer the MDS in practice (i.e., direct caregivers versus administrative staff). The task of filling out the MDS should be performed by an RN who confirms the completion of the forms with a legal signature (CMS, 2002/2008). RNs fill out the MDS forms using a chart review, their own assessment, and other NH staff members’ assessments (Lum et al., 2005).

Concerns regarding random errors and bias have been raised because coordinators in charge of completing the MDS have varying resources and training (Lum et al., 2005). If NHs do not have RNs on staff, they are required to submit the MDS forms to RNs for certification (CMS, 2002/2008). NHs usually hire an RN MDS coordinator for this purpose, but each individual NH has its own policy regarding hiring staff responsible for completing MDS forms. These individuals’ backgrounds vary considerably. They may be RNs, attending physicians, social workers, activities specialists, occupational therapists, speech therapists, dietitians, or pharmacists (CMS, 2002/2008).

The clinical competence of the data recorders or MDS coordinators affects the accuracy of the data records (Aaronson & Burman, 1994; Lyons & Payne, 1974). For example, Anderson, Buckwalter, et al. (2003) raised concerns that some NH staff members were unable to identify symptoms related to mood, depression, or behaviors on MDS assessments because of inadequate training. Lawton et al. (1998) also raised concerns that some staff in one NH reported difficulty differentiating dementia and depression. The MDS may be completed by administrative staff without direct care responsibilities or by direct caregivers. When the MDS is filled out by administrative staff, the data records may not be as accurate as those completed by direct health care providers (McCurren, 2002; Schnelle, Wood, Schnelle, & Simmons, 2001).

This process may contribute to failure to identify problems on the MDS because administrative staff’s contact time with residents is shorter than that of direct caregivers (Hendrix et al., 2003). Because large data sets are usually assembled from many different facilities or states, the data are assessed, collected, and documented by many people. Thus, interrater variation, especially for subjective items, threatens validity (VonKoss Krowchuk, Moore, & Richardson, 1995). The MDS has an interrater variation problem because many factors affect MDS data collection (e.g., the clinical competence of recorders, residents’ dysfunctional status). In addition, an MDS completed by trained staff may have better psychometric properties than one administered by untrained staff (Ouslander, 1994).

The communication skills of direct caregivers, and the extent to which they can observe the real situation while completing the MDS, are unknown because the MDS can be completed using either a chart review or direct caregiver observations (Fisher et al., 2002). Although physical assessment findings are likely to be recorded accurately, data that require patient recall and interviews with patients are likely to have more discrepancies (Aaronson & Burman, 1994) because recall bias or prejudice may influence the records. In addition, RNs may not have adequate time to complete an MDS, as they are overburdened in managing residents’ health problems.

The reliability of the MDS may also be threatened by residents’ unstable health and functional status, especially in NHs (Blaum et al., 1997). NHs have different mixes of residents, including severely ill elderly residents and terminally ill younger residents (“Home at Last,” 1999). Residents are generally in poor health, have three to six diagnoses, and receive 3 to 18 drugs per day (Mohler, 2001). Approximately 75% of residents need assistance with more than three ADLs (IOM, 2001; Kovner, Mezey, & Harrington, 2002). The MDS assessment of cognitive function and pain for a patient may fluctuate, because those items are influenced by subjective judgment, acute events, and medication schedules (Fisher et al., 2002). Considering that approximately half of NH residents have dementia, with approximately one third having Alzheimer’s disease (National Academy on an Aging Society, 2000), the data that require interviews with residents may not accurately reflect the resident’s status. These factors, as well as communication difficulties and cognitive impairment, can result in an unstable assessment. Having experienced and professional NH staff can help decrease MDS reliability problems (Hawes et al., 1995).

In addition, some information simply may not be recorded (Aaronson & Burman, 1994). For example, a study by Suri, Egleston, Brody, and Rudberg (1999) using MDS data found that of 2,780 residents, only 11% had advance directives, only 17% had a do-not-resuscitate order on admission, and only 6% among those who had been admitted without advance directives completed one after their admission. Mezey, Mitty, Bottrell, Ramsey, and Fisher (2000) reported that only 51% of all NH residents across the nation have advance directives.

External validity of the studies cited may be threatened because of the following limitations: (a) the samples were small (most of the studies had fewer than 200 participants), and (b) a majority of studies were conducted at a small number of NHs, limiting the generalizability of the findings. Facility characteristics (staffing-to-resident ratio, size, and ownership) also threaten external validity because they may limit generalization of findings from the MDS (Phillips, Hawes, Mor, Fries, & Morris, 1996; Reilly, 2001). Studies using larger data sets and samples that better represent the population are necessary to address the appropriateness of using MDS data in research.

Measurement Error

Measurement error occurs when the theoretical connotations of a concept fail to match its operational definition (i.e., variables in data sets) (Lange & Jacox, 1993). An inappropriate choice of variables also threatens validity: In designs in which researchers pose a research question first and then search for the data, external validity is threatened because the data set may not represent the target population, and internal validity is threatened because researchers may not be able to control confounding variables (Lange & Jacox, 1993).

Threats to Validity Common to Large Databases

Large health care data sets like the MDS are characterized as having the following (Connell, Diehr, & Hart, 1987):

  • Computer-based formats.
  • Large enough samples to accommodate a wide variety of statistical methods.
  • Availability to researchers who are not responsible for data collection.

The MDS and other large data sets have been used and are expected to be used in the future to improve quality, especially in long-term care research (Ryan, Stone, & Raynor, 2004). However, the use of large data sets may be inherently threatened by sampling and measurement errors (Jacob, 1984).

Sample Selection Error

The CMS data center holds and manages basic resident demographic and clinical information of all reported MDS data for the purposes of payment, surveys, certification, regulation, and research (CMS, 2005). All MDS records are stored on magnetic media, and the CMS’ safeguard system includes security codes, staff training for retrieving MDS information, and access to data restricted to authorized staff (CMS, 2005). This storage system, implemented and managed by the CMS, decreases concerns about data storage.

Large data sets like the MDS generally have no sample inclusion and exclusion criteria, as the databases are not developed as the outcome of a study protocol (Lange & Jacox, 1993). Thus, the sample may not represent the whole population of interest to researchers (Lange & Jacox, 1993). For example, data regarding NHs that are not certified by CMS are not available, and the MDS may not represent an entire NH population, although it is a major method of studying NHs.

Data Storage. Pabst (2001) provided several suggestions for managing MDS data:

  • Appropriate choices of hardware and software with expert technical support are essential.
  • As the MDS can be used across different facilities or over time, the data storage format should be consistent. For example, researchers should be cautious about the different formats (proprietary versus generic), especially regarding data transfer from one facility to another or from one research team to another, where different data storage formats may be used. Coding systems differ depending on what software is used.
  • MDS developers should set up data set structures carefully based on sample data sets before actually collecting large data sets, lessening the extra work of changing formats in the long term.
  • File backup of large data sets is necessary in the event of computer problems. If researchers use data without verifying the storage format, accuracy will be threatened.

Data Extraction. Some factors may make data extraction difficult. Part or all of an MDS may be missing, or records may have never been entered (Byar, 1980; VonKoss Krowchuk et al., 1995). For example, newly admitted residents do not yet have prepared MDS data in their charts. Additional extraction is sometimes needed to increase the completeness of a data set.
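Before analysis, researchers typically screen extracted records for such missing or never-entered assessments. A minimal sketch of this screen follows; the field names and the three-way completeness classification are illustrative assumptions, not the official MDS record layout:

```python
# Hypothetical completeness screen for extracted MDS records.
# Field names (cognitive_score, adl_score, mood_score) are illustrative.
REQUIRED_ITEMS = ("cognitive_score", "adl_score", "mood_score")

def extraction_report(records):
    """Classify each record as complete, partially missing, or never entered."""
    complete, partial, absent = [], [], []
    for r in records:
        present = [k for k in REQUIRED_ITEMS if r.get(k) is not None]
        if len(present) == len(REQUIRED_ITEMS):
            complete.append(r["resident_id"])
        elif present:
            partial.append(r["resident_id"])
        else:
            absent.append(r["resident_id"])  # e.g., newly admitted resident
    return {"complete": complete, "partial": partial, "absent": absent}

records = [
    {"resident_id": "A1", "cognitive_score": 2, "adl_score": 3, "mood_score": 1},
    {"resident_id": "A2", "cognitive_score": 4, "adl_score": None, "mood_score": 0},
    {"resident_id": "A3"},  # no MDS entered yet
]
print(extraction_report(records))
# → {'complete': ['A1'], 'partial': ['A2'], 'absent': ['A3']}
```

A report of this kind documents how much additional extraction is needed and which residents must be excluded or followed up before the data set can be considered complete.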

Data Collection and Documentation. Data documentation or coding of the MDS may be performed by the same person doing the assessment or by a different person who does not assess the resident. If the MDS data collection and coding are performed by the same person, shortcuts may be implemented to save time and effort. For example, data collectors can enter their observations directly into a laptop computer, if one is available (Pabst, 2001). However, if MDS recorders have to recall their observations when they document or code, recall bias can decrease the accuracy of the information (VonKoss Krowchuk et al., 1995). In the latter case, large amounts of data must be entered, increasing the possibility of error.

Another cause of data entry error is a demanding workload: MDS RNs and coordinators may be tired, and errors may occur (Pabst, 2001). Appropriate allocation of work and use of additional human resources during peak times are suggested to lessen error rates (Pabst, 2001). Directly entered data may be linked to the central research data repository, or copied data may be sent by regular or electronic mail (Pabst, 2001). These methods decrease errors because the data entry process is shortened, but appropriate training for data collection and regular checks of data accuracy are necessary.

Conclusion

MDS 2.0 has been skillfully developed and is used widely. It is a very good source of research data, especially for measuring quality in long-term care settings. The MDS is a good source of data because researchers have access to large samples representing very large populations at state and national levels, as all NHs certified by CMS are required to use the form. Furthermore, the use of the MDS for secondary data analysis is advantageous in that it can save time and effort, can decrease expenses such as paying people to collect data, is useful in conducting exploratory and correlational studies, and is helpful in the examination of trends over time (Castle, 2003; Nicoll & Beyea, 1999; Rantz & Connolly, 2004). The MDS is an important source of data regarding NH residents and is useful for outcomes research, reimbursements, payments, surveys, certification, and regulation. In addition, the MDS is protected by the CMS’ safeguard systems, and researchers can compare resident outcomes across NHs.

However, use of the MDS in research has limitations (Castle, 2003; Nicoll & Beyea, 1999; Rantz & Connolly, 2004):

  • Investigators cannot directly retrieve data they want to use and cannot determine specific times and intervals.
  • The accuracy of the data may be questionable.
  • The data may be old.
  • Data in the MDS were collected for purposes other than research and may not include variables researchers want for their studies. Furthermore, researchers need to be aware that the retrieved data do not represent the entire NH population and that variation in MDS documentation remains a concern.

Ways to improve the psychometric properties of the MDS include the use of clear, standardized, and consistent definitions. Adequate and continuous training programs with specific, concrete instructions and protocols will also help improve the quality of data collected from the MDS (Lawton et al., 1998). Stable psychometric properties of MDS 2.0 are needed to support the care of all NH residents.

References

  • Aaronson, LS & Burman, ME1994. Use of health records in research: Reliability and validity issues. Research in Nursing & Health, 17, 67–73. doi:10.1002/nur.4770170110 [CrossRef]
  • Anderson, L, Connolly, B, Pratt, M & Shapiro, R. 2003, June. MDS 3.0 for nursing homes. Retrieved December 1, 2008, from http://www.cms.hhs.gov/NursingHomeQualityInits/25_NHQIMDS30.asp
  • Anderson, RL, Buckwalter, KC, Buchanan, RJ, Maas, ML & Imhof, SL. 2003. Validity and reliability of the minimum data set depression rating scale (MDSDRS) for older adults in nursing homes. Age and Ageing, 32, 435–438. doi:10.1093/ageing/32.4.435
  • Blaum, CS, O’Neill, EF, Clements, KM, Fries, BE & Fiatarone, MA. 1997. Validity of the minimum data set for assessing nutritional status in nursing home residents. American Journal of Clinical Nutrition, 66, 787–794.
  • Burrows, AB, Morris, JN, Simon, SE, Hirdes, JP & Phillips, C. 2000. Development of a minimum data set-based depression rating scale for use in nursing homes. Age and Ageing, 29, 165–172. doi:10.1093/ageing/29.2.165
  • Byar, DP. 1980. Why data bases should not replace randomized clinical trials. Biometrics, 36, 337–342. doi:10.2307/2529989
  • Casten, R, Lawton, MP, Parmelee, PA & Kleban, MH. 1998. Psychometric characteristics of the minimum data set I: Confirmatory factor analysis. Journal of the American Geriatrics Society, 46, 726–735.
  • Castle, JE. 2003. Maximizing research opportunities: Secondary data analysis. Journal of Neuroscience Nursing, 35, 287–290.
  • Centers for Medicare & Medicaid Services. 2005. MDS 3.0 update. Retrieved June 30, 2006, from http://www.cms.hhs.gov/NursingHomeQualityInits/downloads/MDS30MDS30Update.pdf
  • Centers for Medicare & Medicaid Services. 2008, July. Revised long-term care facility resident assessment instrument user’s manual. Version 2.0. Retrieved November 5, 2008, from http://www.cms.hhs.gov/nursinghomequalityinits/20_NHQIMDS20.asp (Original work published 2002)
  • Chomiak, A, Eccord, M, Frederickson, E, Glass, R, Glickman, M, Grigsby, J, et al. 2001. Final report: Development and testing of a minimum data set accuracy verification protocol. Baltimore: Centers for Medicare & Medicaid Services.
  • Cohen-Mansfield, J, Taylor, L, McConnell, D & Horton, D. 1999. Estimating the cognitive ability of nursing home residents from the minimum data set. Outcomes Management for Nursing Practice, 3, 43–46.
  • Connell, FA, Diehr, P & Hart, LG. 1987. The use of large data bases in health care studies. Annual Review of Public Health, 8, 51–74. doi:10.1146/annurev.pu.08.050187.000411
  • Fisher, SE, Burgio, LD, Thorn, BE, Allen-Burge, R, Gerstle, J, Roth, DL, et al. 2002. Pain assessment and management in cognitively impaired nursing home residents: Association of certified nursing assistant pain report, minimum data set pain report, and analgesic medication use. Journal of the American Geriatrics Society, 50, 152–156. doi:10.1046/j.1532-5415.2002.50021.x
  • Gruber-Baldini, AL, Zimmerman, SI, Mortimore, E & Magaziner, J. 2000. The validity of the minimum data set in measuring the cognitive impairment of persons admitted to nursing homes. Journal of the American Geriatrics Society, 48, 1601–1606.
  • Hartmaier, SL, Sloane, PD, Guess, HA & Koch, GG. 1994. The MDS cognition scale: A valid instrument for identifying and staging nursing home residents with dementia using the minimum data set. Journal of the American Geriatrics Society, 42, 1173–1179.
  • Harvell, J. 2004. Addressing the healthcare needs of our aging population with technology. Retrieved March 30, 2005, from the Institute of Electrical and Electronics Engineers Web site: http://www.ieeeusa.org/calendar/conferences/geriatrictech/JennieHarvellHHS.ppt
  • Hawes, C, Mor, V, Phillips, CD, Fries, BE, Morris, JN, Steele-Friedlob, E, et al. 1997. The OBRA-87 nursing home regulations and implementation of the resident assessment instrument: Effects on process quality. Journal of the American Geriatrics Society, 45, 977–985.
  • Hawes, C, Morris, JN, Phillips, CD, Mor, V, Fries, BE & Nonemaker, S. 1995. Reliability estimates for the minimum data set for nursing home resident assessment and care screening (MDS). The Gerontologist, 35, 172–178. doi:10.1159/000117116
  • Hendrix, CC, Sakauye, KM, Karabatsos, G & Daigle, D. 2003. The use of the minimum data set to identify depression in the elderly. Journal of the American Medical Directors Association, 4, 308–312.
  • Home at last. 1999. Elder Care, 115, 32.
  • Horgas, AL & Margrett, JA. 2001. Measuring behavioral and mood disruptions in nursing home residents using the minimum data set. Outcomes Management for Nursing Practice, 5, 28–35.
  • Institute of Medicine. 2001. Improving the quality of long-term care. Retrieved November 6, 2008, from the National Academies Press Web site: http://books.nap.edu/html/improving_long_term/
  • Jacob, H. 1984. Using published data: Errors and remedies. Newbury Park, CA: Sage.
  • Karon, SL, Sainfort, F & Zimmerman, DR. 1999. Stability of nursing home quality indicators over time. Medical Care, 37, 570–579. doi:10.1097/00005650-199906000-00006
  • Koehler, M, Rabinowitz, T, Hirdes, J, Stones, M, Carpenter, GI, Fries, BE, et al. 2005. Measuring depression in nursing home residents with the MDS and GDS: An observational psychometric study. BMC Geriatrics, 5, 1. Retrieved November 5, 2008, from http://www.biomedcentral.com/1471-2318/5/1
  • Kovner, CT, Mezey, M & Harrington, C. 2002. Who cares for older adults? Workforce implications of an aging society. Health Affairs (Millwood), 21, 78–89. doi:10.1377/hlthaff.21.5.78
  • Lange, LL & Jacox, A. 1993. Using large data bases in nursing and health policy research. Journal of Professional Nursing, 9, 204–211. doi:10.1016/8755-7223(93)90037-D
  • Lawton, MP, Casten, R, Parmelee, PA, Van Haitsma, K, Corn, J & Kleban, MH. 1998. Psychometric characteristics of the minimum data set II: Validity. Journal of the American Geriatrics Society, 46, 736–744.
  • Lum, TY, Lin, WC & Kane, RL. 2005. Use of proxy respondents and accuracy of minimum data set assessments of activities of daily living. Journals of Gerontology. Series A, Biological Sciences and Medical Sciences, 60, 654–659.
  • Lyons, TF & Payne, BC. 1974. The relationship of physicians’ medical recording performance to their medical care performance. Medical Care, 12, 714–720. doi:10.1097/00005650-197408000-00009
  • Marek, KD, Rantz, MJ, Fagin, CM & Krejci, JW. 1996. OBRA ’87: Has it resulted in better quality of care? Journal of Gerontological Nursing, 22(10), 28–36.
  • McCurren, C. 2002. Assessment for depression among nursing home elders: Evaluation of the MDS mood assessment. Geriatric Nursing, 23, 103–108. doi:10.1067/mgn.2002.123796
  • Meeks, S. 2004. Further evaluation of the MDS depression scale versus the Geriatric Depression Scale among nursing home residents. Journal of Mental Health and Aging, 10, 325–335.
  • Mezey, MD, Mitty, EL, Bottrell, MM, Ramsey, GC & Fisher, T. 2000. Advance directives: Older adults with dementia. Clinics in Geriatric Medicine, 16, 255–268. doi:10.1016/S0749-0690(05)70056-2
  • Migdail, KJ. 1992. Nursing home reform: Five years later. Journal of American Health Policy, 25, 41–46.
  • Mohler, MM. 2001. Nursing home staffing adequacy: Nurses speak out. Policy, Politics, & Nursing Practice, 2, 128–133. doi:10.1177/152715440100200207
  • Mor, V, Berg, K, Angelelli, J, Gifford, D, Morris, J & Moore, T. 2003. The quality of quality measurement in U.S. nursing homes. The Gerontologist, 43(Special No. 2), 37–46.
  • Morris, JN, Fries, BE, Mehr, DR, Hawes, C, Phillips, C, Mor, V, et al. 1994. MDS cognitive performance scale. Journal of Gerontology, 49, M174–M182.
  • Morris, JN, Fries, BE & Morris, SA. 1999. Scaling ADLs within the MDS. Journals of Gerontology. Series A, Biological Sciences and Medical Sciences, 54, M546–M553.
  • Morris, JN, Nonemaker, S, Murphy, K, Hawes, C, Fries, BE, Mor, V, et al. 1997. A commitment to change: Revision of HCFA’s RAI. Journal of the American Geriatrics Society, 45, 1011–1016.
  • Mukamel, DB & Spector, WD. 2003. Quality report cards and nursing home quality. The Gerontologist, 43(Special No. 2), 58–66.
  • National Academy on an Aging Society. 2000. Alzheimer’s disease and dementia: A growing challenge. Retrieved November 5, 2008, from http://www.agingsociety.org/agingsociety/pdf/Alzheimers.pdf
  • Nicoll, LH & Beyea, SC. 1999. Using secondary data analysis for nursing research. AORN Journal, 69, 428, 430, 433. doi:10.1016/S0001-2092(06)62504-0
  • Omnibus Budget Reconciliation Act of 1987, Pub. L. No. 100–203, §2, 101 Stat. 1330 (1987).
  • Ouslander, JG. 1994. Maximizing the minimum data set. Journal of the American Geriatrics Society, 42, 1212–1213.
  • Pabst, MK. 2001. Methodological considerations: Using large data sets. Outcomes Management for Nursing Practice, 5, 6–10.
  • Phillips, CD, Hawes, C, Mor, V, Fries, B & Morris, J. 1996. Evaluation of the nursing home resident assessment instrument. Research Triangle Park, NC: Research Triangle Institute.
  • Polit, DF & Beck, CT. 2006. Essentials of nursing research: Methods, appraisal, and utilization (6th ed.). Philadelphia: Lippincott Williams & Wilkins.
  • Rantz, MJ & Connolly, RP. 2004. Measuring nursing care quality and using large data sets in nonacute care settings: State of the science. Nursing Outlook, 52, 23–37. doi:10.1016/j.outlook.2003.11.002
  • Rantz, MJ, Mehr, DR, Conn, VS, Hicks, LL, Porter, R, Madsen, RW, et al. 1996. Assessing quality of nursing home care: The foundation for improving resident outcomes. Journal of Nursing Care Quality, 10, 1–9.
  • Rantz, MJ, Petroski, GF, Madsen, RW, Mehr, DR, Popejoy, L, Hicks, LL, et al. 2000. Setting thresholds for quality indicators derived from MDS data for nursing home quality improvement reports: An update. Joint Commission Journal on Quality Improvement, 26, 101–110.
  • Reilly, KE. 2001. Development and testing of a minimum data set accuracy verification protocol: Final report. Bethesda, MD: Abt Associates.
  • Ryan, J, Stone, RI & Raynor, CR. 2004. Using large data sets in long-term care to measure and improve quality. Nursing Outlook, 52, 38–44. doi:10.1016/j.outlook.2003.11.001
  • Schnelle, JF, Wood, S, Schnelle, ER & Simmons, SF. 2001. Measurement sensitivity and the minimum data set depression quality indicator. The Gerontologist, 41, 401–405.
  • Simmons, SF, Lim, B & Schnelle, JF. 2002. Accuracy of minimum data set in identifying residents at risk for undernutrition: Oral intake and food complaints. Journal of the American Medical Directors Association, 3, 140–145.
  • Snowden, M, McCormick, W, Russo, J, Srebnik, D, Comtois, K, Bowen, J, et al. 1999. Validity and responsiveness of the minimum data set. Journal of the American Geriatrics Society, 47, 1000–1004.
  • Stevenson, KB, Moore, JW & Sleeper, B. 2004. Validity of the minimum data set in identifying urinary tract infections in residents of long-term care facilities. Journal of the American Geriatrics Society, 52, 707–711. doi:10.1111/j.1532-5415.2004.52206.x
  • Suri, DN, Egleston, BL, Brody, JA & Rudberg, MA. 1999. Nursing home resident use of care directives. Journals of Gerontology. Series A, Biological Sciences and Medical Sciences, 54, M225–M229.
  • Toth, AM, Bliss, DZ, Savik, K & Wyman, JF. 2008. Validating MDS data about risk factors for perineal dermatitis by comparing with nursing home records. Journal of Gerontological Nursing, 34(5), 12–18.
  • U.S. Department of Health and Human Services, Office of the Inspector General. 2001. Nursing home resident assessment quality of care (OEI-02-99-00040). Retrieved November 5, 2008, from http://www.oig.hhs.gov/oei/reports/oei-02-99-00040.pdf
  • U.S. General Accounting Office. 2002. Nursing homes: Federal efforts to monitor resident assessment data should complement state activities (Report No. GAO-02-279). Retrieved March 21, 2005, from http://www.gao.gov/new.items/d02279.pdf
  • VonKoss Krowchuk, H, Moore, ML & Richardson, L. 1995. Using health care records as sources of data for research. Journal of Nursing Measurement, 3, 3–12.
  • Won, A, Morris, J, Nonemaker, S & Lipsitz, LA. 1999. A foundation for excellence in long-term care: The minimum data set. Annals of Long-Term Care, 7, 92–97.
  • Zimmerman, DR. 2003. Improving nursing home quality of care through outcomes data: The MDS quality indicators. International Journal of Geriatric Psychiatry, 18, 250–257. doi:10.1002/gps.820
  • Zimmerman, DR. 2005. The MDS QI’s: A potential resource for consumers in monitoring care. Retrieved March 1, 2005, from the National Long Term Care Ombudsman Resource Center Web site: http://www.ltcombudsman.org/ombpublic/49_369_3131.cfm

Psychometric Properties of Minimum Data Set (MDS) 2.0 Items

Columns reported for each study: source; N (residents); number of nursing homes (NHs); criterion validity (coefficient, with comparison instrument); construct validity; and reliability (internal consistency, test–retest, inter-rater).

Overall MDS
  • Hawes et al. (1995): 13 NHs; reliability 0.4 to 0.7
  • Morris et al. (1997): N = 187; 21 NHs; reliability 0.39 to 1.0
  • Casten et al. (1998): internal consistency 0.75 to 0.99

Cognitive Performance Scale
  • Morris et al. (1994): N = 2,172; NHs not reported; high criterion validity (MMSE, TSI); construct validity reflected two different samples; high reliability
  • Hartmaier et al. (1994): N = 200; 8 NHs; 0.41 to 0.76 (Global Deterioration Scale)
  • Hartmaier et al. (1994): N = 200; 8 NHs; high (MMSE)
  • Gruber-Baldini et al. (2000): N = 1,939; 59 NHs; 0.92 (Cognitive Performance Scale); 0.68 (MMSE), 0.85; 0.66 (Psychogeriatric Dependency Rating Scale, Orientation Scale)
  • Cohen-Mansfield et al. (1999): N = 290; 1 NH; 0.71 to 0.75 (MMSE); 0.75 to 0.77 (Global Deterioration Scale)
  • Snowden et al. (1999): N = 140; NHs not reported; 0.45 (MMSE)
  • Horgas & Margrett (2001): N = 135; 1 NH; significant only for nondepressed sample, not depressed sample (RMBPC)
  • Lawton et al. (1998): 0.45 (Mattis Dementia Rating Scale); 0.7 (Global Deterioration Scale)

Depression
  • Meeks (2004): N = 91; 1 NH; 0.5 (GDS)
  • Hendrix et al. (2003): N = 321; 3 NHs; not significant (Cornell Scale for Depression in Dementia)
  • Anderson, Buckwalter, et al. (2003): N = 145; 3 NHs; 0.09 to 0.23 (Hamilton Depression Rating Scale); −0.07 to 0.19 (GDS); 0.10 to 0.26 (chart diagnosis)
  • Koehler et al. (2005): N = 704; 9 NHs; not significant (GDS)
  • Burrows et al. (2000): N = 108; 2 NHs; 0.70 (Hamilton Depression Rating Scale); 0.69 (Cornell Scale for Depression in Dementia)
  • Lawton et al. (1998): N = 513; 1 NH; Monitoring of Side Effects Scale; 0.15 to 0.44
  • Horgas & Margrett (2001): N = 135; 1 NH; significant only for nondepressed sample, not depressed sample (RMBPC); 0.27 to 0.59

Perineal dermatitis
  • Toth et al. (2008): N = 43; 2 NHs; significant (chart review); adequate reliability

Pain
  • Cohen-Mansfield et al. (1999): N = 80; 8 NHs; 0.69 to 0.88; subsamples: 49a, 0.48 to 0.85; 31b, 0.75 to 0.92; 49a, 0.89; 31b, 0.92; 27b, some correlations significant (residents’ self-reports); 27a, no significant relationship (residents’ self-reports); 27b, some correlations significant (geriatricians’ assessment); 27a, not significant (geriatricians’ assessment)
  • Fisher et al. (2002): N = 57; 3 NHs; no significant relationship (proxy pain questionnaire developed by researchers); 0.84 to 0.87

Urinary tract infection
  • Stevenson et al. (2004): N not reported; 16 NHs; not significant (surveillance data sets)

Nutrition/weight loss
  • Simmons et al. (2002): N = 75; 2 NHs; not significant (interview assessment protocols by research staff)
  • Blaum et al. (1997): N = 186; 1 NH; significantly related (anthropometrical and bioelectrical measures of nutritional status)
  • Simmons et al. (2002): N = 400; 16 NHs; reflected differences among two groups

Behavior
  • Snowden et al. (1999): N = 140; NHs not reported; 0.5 (Alzheimer’s Disease Patient Registry)
  • Lawton et al. (1998): N = 513; 1 NH; 0.24 to 0.37 (Cohen-Mansfield Agitation Inventory)
  • Horgas & Margrett (2001): N = 135; 1 NH; significant only for nondepressed sample, not depressed sample (RMBPC)

Activities of daily living
  • Snowden et al. (1999): N = 140; NHs not reported; 0.5 (Dementia Rating Scale)
  • Lawton et al. (1998): N = 513; 1 NH; 0.58 to 0.79 (Physical Self-Maintenance Scale)
  • Morris et al. (1999): 0.75 to 0.94
Authors

Dr. Shin is Research Assistant Professor, and Dr. Scherer is Associate Professor, School of Nursing, State University of New York at Buffalo, Buffalo, New York.

Address correspondence to Juh Hyun Shin, PhD, RN, Research Assistant Professor, School of Nursing, State University of New York at Buffalo, 823 Kimball Tower, 3435 Main Street, Buffalo, NY 14214-3079; e-mail: iamjoohyun@gmail.com.

doi:10.3928/00989134-20090101-09
