“If you can't measure it, you can't improve it.” This famous quote has been attributed to Peter Drucker, Lord Kelvin, and even Master Yoda. Despite the ambiguity of its provenance, the message is clear and something every scientist should heed. Unless we use valid and reliable measures, we can never know with any degree of confidence whether our research findings reflect reality, and ultimately, whether the novel interventions we test actually improve practice. Understandably, there is great interest in measurement among health care scientists.
In October 2017, the U.S. Department of Health and Human Services, along with private sector organizations, sponsored the first National Research Summit on Care, Services and Supports for Persons with Dementia and Their Caregivers at the National Institutes of Health. The meeting was part of the activities under the National Alzheimer's Project Act. The Final Report was released on April 27, 2018 (available at https://aspe.hhs.gov/national-research-summit-care-services-and-supports-persons-dementia-and-their-caregivers). The recommendations, proposed by experts within the scientific and service communities and by individuals living with dementia, will inform funders and service organizations alike. Among them were recommendations that called for improved measurement as a way to move the science of dementia care forward.
This state of the science commentary is directed at the preliminary recommendation to develop and identify a broad array of outcome measures (objective and subjective) that are meaningful to different stakeholders. Five significant measurement challenges that nurse scientists confront when conducting research with individuals living with dementia are presented: (a) assessment of subjective memory complaints; (b) validity of self-report; (c) ecological validity of cognitive performance measures; (d) use of biomarkers (neuroimaging) for describing the biological dynamics of symptoms; and (e) effect of high variability in measurement on statistical significance. Methods for addressing these challenges are offered.
Assessment of Subjective Memory Complaints
Improving identification of individuals at high risk for cognitive decline, such as older adults in the preclinical stage of Alzheimer's disease (AD), is of scientific and clinical interest (Jack et al., 2017). AD is insidious in its onset; therefore, cognitive performance is slowly but progressively impacted over a long period prior to a diagnosis of mild cognitive impairment or AD (Jessen et al., 2014). This period is critical for initiating early interventions to potentially slow or delay the functional deficits that accompany cognitive decline. However, distinguishing cognitive changes indicative of future decline from those associated with normal cognitive aging, or other conditions such as depression, is a substantial challenge.
Self-reports of memory or other cognitive problems are a key component of preclinical AD detection, particularly because subtle problems are more likely to occur in complex, real-world situations than controlled clinical or research settings (Sperling et al., 2011). Reports of memory problems among cognitively intact older adults are associated with a two- to four-fold higher risk of AD over time (Eramudugolla, Cherbuin, Easteal, Jorm, & Anstey, 2012; Reisberg, Shulman, Torossian, Leng, & Zhu, 2010). However, measurement of self-reported memory is highly inconsistent across studies and is limited in specificity and sensitivity. Between 20% and 50% of older adults endorse self-reports of memory problems, and the majority of these individuals will not go on to develop AD (Fritsch, McClendon, Wallendal, Hyde, & Larsen, 2014; Jonker, Geerlings, & Schmand, 2000). Relatedly, factors such as social desirability bias, impaired awareness, and concerns about loss of independence may influence older adults' responses to questions about memory or cognitive performance. In addition, anxiety and depressive symptoms commonly co-occur with reports of memory problems (Hill et al., 2016). Cognitive symptoms, such as problems with memory, are common in depression, and anxiety can impact aspects of attention such as concentration (Gonda et al., 2015). Therefore, self-reports of cognitive problems are highly heterogeneous in their causes, and measurement development must consider the complexity of their assessment.
There is no standard measure for cognitive self-report. In response to the need for common terminology and research procedures to improve our understanding of the subjective experience of cognitive decline, the Subjective Cognitive Decline Initiative (SCD-I) working group was formed in 2012 (Jessen et al., 2014). The group includes AD researchers who investigate self-reported cognition, leaders from the International Working Group on pre-clinical AD, and the U.S. National Institute on Aging–Alzheimer's Association group, as well as key representatives of current AD detection studies. In its 2015 review of cognitive self-report measures used in large aging cohort studies, the SCD-I working group found that a wide variety of measures are used (Rabin et al., 2015). Inconsistencies in the construct assessed (e.g., memory complaints vs. ratings of memory ability), timeframe (e.g., ratings of perceived change across a period of 1 to ≥20 years), and response options (e.g., dichotomous questions vs. Likert scales) limit the ability to compare results across studies. Recent studies further demonstrate that the items used to assess self-reported cognition are differentially associated with indicators of successful aging beyond objective cognition, such as life satisfaction (Mogle, Hill, & McDermott, 2017) and health-related quality of life (Roehr et al., 2017).
As this area of science continues to develop, future measurement of cognitive self-report should consider the following: items should measure specific aspects of cognition that are relevant to everyday life; timeframes should be short and specific; interpretation of responses should reflect the measured concept (i.e., memory concerns are different than ratings of memory performance); and multimodal assessment that includes self, informant, and clinician ratings or observation may improve precision of pre-clinical AD detection (Hill et al., 2016; Rabin et al., 2015).
Validity of Self-Report
An accurate understanding of subjective states and experiences, such as well-being, pain, or mood, necessitates that the source of the data be the individual him/herself. Many nursing interventions target these states and experiences to improve quality of life for individuals living with dementia. In research with the general population, self-report is the standard measure of subjective experiences; in dementia care research, however, assumptions about the validity of self-report have led many investigators to rely instead on informant reports or observation by research staff (Kolanowski, Hoffman, & Hofer, 2007).
These assumptions are currently being challenged by research and practice communities and stakeholder groups that represent individuals living with dementia (Downs & Lord, 2017; Frank, Basch, & Selby, 2014; Taylor, DeMers, Vig, & Borson, 2012). In its revised 2018 Dementia Care Practice Recommendations, the Alzheimer's Association placed a strong emphasis on inclusion of the perspective of the individual with dementia in all phases of assessment, care planning, and management (Fazio, Pace, Maslow, Zimmerman, & Kallmyer, 2018). In the lead-up to the 2017 National Research Summit on Dementia Care, the Patient-Centered Outcomes Research Institute (PCORI) put forth a series of recommendations for why we should, and how we can, improve the validity of self-reported outcome measures in dementia care research. Both reports point to the moral imperative that practitioners and researchers have to respect and support the autonomy and self-determination of individuals living with dementia; acknowledge that self-report is the best way to obtain data on subjective states and experiences; and recognize that individuals living with dementia can provide these data. These reports also emphasize the fact that the ability to self-report depends on many individual (e.g., cognitive reserve, type of dementia) and environmental (e.g., use of sensory aids or cueing) factors as well as the complexity of the instruments used to obtain these data. Finally, they provide evidence that self-report is often at variance with informant report because of caregiver burden or lack of informant familiarity with the individual (Kolanowski et al., 2007; Martyr, Nelis, & Clare, 2014).
There are a number of methodological approaches that improve the validity of self-reports of individuals living with dementia. To begin, valid cognitive screening tools should be used to determine whether self-report is possible (Taylor et al., 2012). When it is not, other methods for assessing subjective states, such as non-verbal behavior and multiple informants, can be used. Investigators should ensure that the environment is supportive of accurate assessment by eliminating extraneous distractions, using sensory aids and large print, and speaking and listening with the aim of promoting communication (Williams et al., 2018).
The complexity of the instrument used to obtain self-report data is also a critical factor. In-person interviews are likely better than those conducted over the telephone. Consideration should be given to the memory requirements of the instrument and the individual's capacity to recall distal events. Instruments with numerous Likert response options might be simplified to two or three options.
A recommendation for the field that came out of the PCORI pre-summit was to create an encyclopedia of tools that have been validated by disease stage for use by individuals living with dementia (PCORI, 2017). Research is also needed that helps us understand which individuals under which conditions can self-report about which outcomes. This level of specificity has implications for quality of care as well as quality of life. For example, an individual with delirium superimposed on dementia may deny pain, yet observational assessment reveals pain symptoms, and treatment with an analgesic improves both pain symptoms and function.
Ecological Validity of Cognitive Performance Measures
The extent to which our assessments of cognitive performance, whether objective or subjective, reflect an individual's real-world behavior is critical to the accurate, valid representation of cognitive status (Chaytor & Schmitter-Edgecombe, 2003; Dawson & Marcotte, 2017). Further, assessing real-world behavior connected to cognitive performance is key to understanding how, when, and what intervention strategies would be of most value to the individual and his/her family (Petersen et al., 2001). For example, knowing an individual's ability to remember a list of unrelated words given to him/her by a researcher or clinician does not necessarily tell us how well that same person can remember a grocery list (which includes conceptually related information) in his/her daily life. However, remembering a grocery list is an everyday cognitive task needed to function in the real world. Cognitive measures that reflect everyday cognitive tasks using stimuli and/or task demands consistent with what an individual experiences in daily life (e.g., reading and remembering instructions on a medication label) are considered ecologically valid.
Ecological validity broadly defined is a form of external validity specific to whether assessments adequately represent a sample of an individual's behavior that can be generalized to reflect real-world experiences and behaviors (Franzen & Wilhelm, 1996). Issues of ecological validity can apply to the assessments themselves, including the face validity of cognitive demands (verisimilitude) and whether the cognitive assessment predicts real-world behaviors, such as activities of daily living (veridicality) (Spooner & Pachana, 2006). Matching our clinical assessments to the cognitive demands individuals experience in their own lives remains understudied (Parsons, 2016) and is balanced against the counter pressure of using standardized measures (e.g., Mini-Mental State Examination) that provide norms for understanding performance relative to clinical benchmarks (Oren et al., 2015; Petersen et al., 2001). As one example, the Rivermead Behavioral Memory Test, although lengthy, incorporates real-world stimuli and task demands along with normative scoring to gauge an individual's performance (Cockburn, 1996).
An overlooked aspect of ecological validity that is particularly relevant for clinical assessment is the consideration of environmental and contextual influences on performance (Fahrenberg, Myrtek, Pawlik, & Perrez, 2007; Sliwinski et al., 2018). Most clinical cognitive assessments occur in atypical situations where individuals may be under duress (e.g., seeing a nurse practitioner [NP] to address symptoms, including those related to cognition) or otherwise feel environmental pressures on their performance (Parsons, 2016). The clinical context itself may play an important role in assessment and bias scores on measures. Performance may be improved through social pressure on motivation (Schmader, Johns, & Forbes, 2008) or hindered by anxiety over the implications of obtaining a low score (Reese, Cherry, & Norris, 1999). For example, better than typical scores might be observed for individuals with better relationships with their NP, whereas poorer than typical scores might be observed for individuals who are anxious about changes in their cognition and what that might mean for their independence. Akin to “white coat hypertension,” where some individuals experience higher blood pressure when readings are taken in clinical settings compared with natural environments, it is unclear whom the environment influences and when these influences are most salient. However, acknowledging that these contextual influences limit the validity of scores obtained in clinical environments is an obvious first step for improving cognitive assessments.
The future of the science in cognitive assessment should work toward developing measures that better incorporate features of real-world cognitive demands and familiar stimuli but that also establish benchmarks for understanding scores in a clinically meaningful way (Dawson & Marcotte, 2017). As one example, a brief assessment that includes a test of memory for a grocery list that includes a standardized scoring system for potentially meaningful errors may provide information about everyday cognitive functioning. It is important to balance the demands clinicians face (e.g., time constraints, need for interpretable scores) with the ecological relevance of measures. The natural environment cannot be replicated in clinical settings, but measures can be developed that are a better match to the ultimate goals of such assessments (e.g., detection of functional impairment) (Chaytor & Schmitter-Edgecombe, 2003). As validation of these measures is ongoing, clinicians and researchers alike are recommended to consider multimodal assessments of cognition. Pairing objective measures of cognition with subjective reports of everyday cognitive difficulties builds a more detailed picture of cognitive functioning that can better inform practice and research.
Use of Biomarkers (Neuroimaging)
Advances in neuroimaging offer a unique opportunity to identify biomarkers associated with neurodegenerative disease using noninvasive methods to examine brain structure and function. Neuroimaging biomarkers include data obtained from neuroimaging techniques such as magnetic resonance imaging (MRI) and positron emission tomography (PET), which allow for better understanding and characterization of the underlying biological processes associated with neurodegenerative conditions. For example, structural MRI can be used to reveal changes in brain structure, such as volume changes in the grey matter or white matter. PET requires the use of a radioactive tracer to image various processes associated with neurodegeneration, such as uptake of fluorodeoxyglucose to measure functional effects of neuronal activity, or biochemical targets, such as beta-amyloid and tau, to measure protein deposition in the brain (Barthel, Schroeter, Hoffmann, & Sabri, 2015; Oxtoby et al., 2017). Because neurodegenerative disease is characterized by neuronal dysfunction and cell death, brain imaging studies allow inferences to be made about clinical outcomes in patients. However, the complexity of neurodegenerative disease poses several challenges to the measurement of neuroimaging biomarkers.
Neurodegenerative diseases are highly heterogeneous in nature. The molecular etiology differs not only by dementia type, but also by distinct pathological subtypes and neuroanatomic patterns of disease that occur within the same neurodegenerative condition (Oxtoby et al., 2017; Poulakis et al., 2018). Individual factors such as genetics and environment also contribute to inter-individual variability in neuroanatomy (Ge, Sabuncu, Smoller, Sperling, & Mormino, 2018; Placek et al., 2016). The great diversity in neuroanatomy has the potential to limit reliability and reproducibility of neuroanatomical biomarkers. Therefore, caution must be used in the interpretation of findings, particularly when a sample is small and less well phenotyped. More work is needed to determine how individual differences such as genetics influence neuroanatomy. Finally, it would be useful to study well-powered heterogeneous groups across neurodegenerative conditions that would allow for observation of common neural etiologies across the dementias.
One major application of neuroimaging biomarkers in dementia is symptom mapping to identify neuroanatomical signatures of clinical symptoms. Neuroimaging research is useful for identifying dysfunctions in neuroanatomy that are related to the clinical presentation in individuals with neurodegenerative diseases. This method requires the use of a clinical measure to correlate with brain parameters. This approach is useful for understanding how the greater cognitive reserve in more highly educated individuals can result in better performance on cognitive measures relative to those who have less education but a similar pathological load (Lam, Masellis, Freedman, Stuss, & Black, 2013). On the other hand, this approach faces difficulty in finding gold standard clinical measures. Take, for example, the clinical symptom of apathy. Neuroimaging studies of apathy in individuals living with dementia reveal considerable variation between studies, even when apathy is evaluated in the same neurodegenerative disease group (Ducharme, Price, & Dickerson, 2018; Kolanowski et al., 2017). This lack of consistency across studies may be attributed to differences in the scales used to measure apathy. Inconsistencies in the clinical measures used in studies may affect results and our interpretation of neuroimaging biomarkers of dementia symptoms (Massimo et al., 2015). Future meta-analytic approaches may contribute important knowledge about neurobiological factors that drive symptoms in dementia and their potential interrelationships.
Although neuroimaging biomarkers are central to understanding the biological mechanisms of neurodegenerative disease, key measurement issues remain. Given the complexity of neurodegenerative disease, it is unlikely that one single biomarker will emerge. Yet, as neuroimaging science continues to develop and novel imaging techniques emerge, new information can be captured to elucidate the pathological processes that underlie neurodegenerative disease. For example, integration of complementary information from different neuroimaging techniques such as the combined use of volumetric MRI measure of grey matter and diffusion tensor imaging of white matter tracts may be necessary to capture heterogeneity of disease pathology (McMillan et al., 2014). Network science is another rapidly growing area of neuroscience that seeks to understand patterns of anatomical connections and brain-network organization (Oxtoby et al., 2017; Rubinov & Sporns, 2010). The use of network features to determine new biomarkers to characterize neurodegenerative disease will be an important next step in neuroscience research. Looking forward, neuroimaging biomarker research will continue to generate new understandings of neurodegenerative disease, a crucial step in the development of novel treatments for individuals living with dementia.
Nurse researchers play an integral role in the development of explanatory models that can be tested using neuroimaging methods. Although neuroimaging is biomedical in nature, models that integrate physical and environmental factors to predict important outcomes will advance our understanding of the mechanistic dynamics of clinical symptoms (Kolanowski, Litaker, & Buettner, 2005; Massimo, Evans, & Grossman, 2014). In concert with a team of neuroimaging experts, nurses serve an important role as thought leaders to improve the measurement of neuroimaging biomarkers in dementia research.
Effect of High Variability on Statistical Significance
In any given group of individuals living with dementia, there is widespread variability in pathological burden and clinical symptoms. This high degree of heterogeneity is reflected in most aspects of the disease process, progression, and experience. Most notably, individuals living with dementia demonstrate: (a) varying rates of progression (Komarova & Thalhauser, 2011), (b) distinct brain pathology within and across etiologies (Jicha & Nelson, 2011; Lam et al., 2013), and (c) various degrees of resilience and response to the disease process as reflected by distinct clinical symptomatology (Boublay, Schott, & Krolak-Salmon, 2016; Negash et al., 2013). To complicate matters more, pathological heterogeneity and severity do not correlate with the heterogeneity present in observed or subjectively reported clinical symptoms (Perez-Nievas et al., 2013). All of these underlying sources of heterogeneity translate to high variability in measurement of cognitive and non-cognitive symptoms (Bossers, van der Woude, Boersma, Scherder, & van Heuvelen, 2012; Lam et al., 2013; Lanctôt et al., 2017). Collectively, this variation produces serious challenges for inferences regarding statistical and clinical significance when using many current clinical measures. Many nurse scientists and clinicians may not fully understand or appreciate the influence of high variability on the validity of statistical inferences—a requisite to understanding how to draw and interpret appropriate conclusions about empirical findings.
High variability reduces the ability to detect statistical significance, particularly for tests of equivalence (Gruman, Cribbie, & Arpin-Cribbie, 2007). For this reason, it is critical that researchers carefully review statistical power estimates in light of known variability in the measures used. In one study that evaluated two established staging systems for dementia, the authors found relatively large standard deviations for stage duration, which were greater for late stage disease, exceeding 50% of the mean stage duration values (Komarova & Thalhauser, 2011). Accuracy in estimates of statistical power is reliant on accurate estimates of standard deviation. Standard deviation provides a measure of variability by defining dispersion of individual observations surrounding a given mean. The more variable the data, the larger the standard deviation. The standard deviation is often confused, or used interchangeably, with standard error of the mean (SEM), which is a distinct parameter that quantifies uncertainty in the estimate of the mean (Wullschleger, Aghlmandi, Egger, & Zwahlen, 2014). The distinction is important, as presenting data with the SEM rather than standard deviation is likely to make them seem less variable. High variability in dementia-specific measurement, as reflected most commonly by the standard deviation, has important implications for statistical validity. For example, smaller sample sizes with higher variability produce larger confidence intervals than larger sample sizes with lower variability—suggesting that situations of small sample size and/or high variability (lower statistical power) may produce conclusions of non-equivalence (Cribbie, Arpin-Cribbie, & Gruman, 2009; Cribbie, Gruman, & Arpin-Cribbie, 2004). The impact of high variability on statistical power is amplified for situations with unequal sample sizes (Gruman et al., 2007).
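The distinction between the standard deviation and the SEM, and the way a confidence interval narrows with sample size while the standard deviation does not, can be illustrated with a brief sketch (the symptom scores below are hypothetical, chosen only to demonstrate the arithmetic):

```python
import math

def summarize(scores):
    """Return mean, sample standard deviation, SEM, and an
    approximate 95% confidence interval half-width."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
    sem = sd / math.sqrt(n)   # SEM shrinks as n grows; SD does not
    ci_half = 1.96 * sem      # normal-approximation 95% CI half-width
    return mean, sd, sem, ci_half

# Hypothetical symptom scores: similar spread, different sample sizes
small = [10, 14, 6, 12, 8]    # n = 5
large = small * 20            # n = 100, same dispersion pattern

_, sd_s, sem_s, ci_s = summarize(small)
_, sd_l, sem_l, ci_l = summarize(large)
# The SDs remain comparable because they describe the data themselves,
# whereas the SEM and CI half-width shrink with n — which is why
# reporting the SEM can make highly variable data look precise.
```

This also makes concrete the point about confidence intervals: a small sample with high variability yields a wide interval, so an equivalence conclusion can hinge on sample size and dispersion rather than on the measure itself.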
There are several specific steps that nurse scientists can take to respond to and address the high variability inherent in dementia-related measurement. First, investigators should quantify and report variability in the measures used and ensure that inferences about statistical and clinical significance are appropriately tempered when variability is high. Second, attempts can be made to reduce variability through improved measurement, study of more homogeneous subpopulations, or selection of stronger experimental designs (as different experimental designs have differential error variance and thus varying rates of statistical power) (Lipsey, 1990; McClelland, 2000).
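The penalty that high variability imposes on statistical power can be made concrete with a small Monte Carlo sketch. The effect size, standard deviations, and group size below are hypothetical, and a simple z approximation stands in for a formal t test to keep the example self-contained:

```python
import random
import statistics as st

def sim_power(effect, sd, n, reps=2000, seed=1):
    """Monte Carlo power estimate for detecting a true mean difference
    of `effect` between two groups of size n, using a two-sided z test
    (normal approximation) at alpha = .05."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        a = [rng.gauss(0.0, sd) for _ in range(n)]
        b = [rng.gauss(effect, sd) for _ in range(n)]
        se = (st.variance(a) / n + st.variance(b) / n) ** 0.5
        if abs((st.mean(b) - st.mean(a)) / se) > 1.96:
            hits += 1
    return hits / reps

# Identical true effect and sample size; only the variability differs
low_var = sim_power(effect=0.5, sd=1.0, n=30)   # power near .50
high_var = sim_power(effect=0.5, sd=3.0, n=30)  # power near .10
```

Tripling the standard deviation, with everything else held constant, collapses the chance of detecting the same true effect — the situation a researcher faces when a noisy instrument inflates within-group variability.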
Experts have called for greater consistency in the measures used for clinical symptoms and the metrics applied for quantifying pathological burden. It is critical that the consensus process take into account the variability of individual measures in addition to other qualities such as reliability, validity, and predictive and clinical value where appropriate. To date, several studies have evaluated variability within cognitive measures (Lam et al., 2013; Mungas et al., 2010); however, less research has examined variability in measures of non-cognitive symptoms such as behavioral and psychological symptoms in dementia (Gitlin, Marx, Stanley, Hansen, & Van Haitsma, 2014). Efforts to examine variability in these measures are often constrained by the use of instruments that aggregate groups of non-cognitive symptoms (Gitlin et al., 2014), diluting the ability to understand specific symptoms. Nevertheless, studies focused on examining variability in measures are needed, and valuable information may be available through ongoing studies with well-characterized cohorts. Longitudinal research is also needed because it can inform the adoption of specific measures for each disease stage, as the extent of variability is likely to change throughout the disease process.
Lastly, there is growing recognition that statistical methods used to evaluate intervention effectiveness alone may be insufficient for informing decision making regarding individually and clinically meaningful changes in outcomes. Some have proposed that scientists adopt and explicitly address the concept of sufficiently important difference, the “smallest amount of patient-valued benefit that an intervention would require to justify associated costs, risks and other harms” (Barrett, Brown, Mundt, & Brown, 2005, p. 251). Similar to the concept of minimal important difference, this appraisal of outcomes recognizes that findings regarding statistical significance may be inconsistent with the values, preferences, and needs of individuals living with dementia and their caregivers. Attempts to address the overwhelming influences of variability on statistical significance in dementia research should be complemented with attempts to evaluate and disseminate the clinical significance and value of new treatments and interventions.
To move the science of dementia care, services, and supports forward, strong measures are needed. There is widespread heterogeneity among individuals with cognitive impairments and the instruments currently used may not be valid or sensitive enough to accurately measure cognitive function, the neuroanatomical signatures of clinical symptoms, or meaningful person-centered outcomes. Measurement imprecision impacts the ability to obtain significance in statistical analyses. To improve the impact of our science, the measures used should reflect the real-world context of research participants and be as reliable and valid as possible.
- Barrett, B., Brown, D., Mundt, M. & Brown, R. (2005). Sufficiently important difference: Expanding the framework of clinical significance. Medical Decision Making, 25, 250–261. doi:10.1177/0272989X05276863 [CrossRef]
- Barthel, H., Schroeter, M.L., Hoffmann, K.T. & Sabri, O. (2015). PET/MR in dementia and other neurodegenerative diseases. Seminars in Nuclear Medicine, 45, 224–233. doi:10.1053/j.semnuclmed.2014.12.003 [CrossRef]
- Bossers, W.J., van der Woude, L.H., Boersma, F., Scherder, E.J. & van Heuvelen, M.J. (2012). Recommended measures for the assessment of cognitive and physical performance in older patients with dementia: A systematic review. Dementia and Geriatric Cognitive Disorders Extra, 2, 589–609. doi:10.1159/000345038 [CrossRef]
- Boublay, N., Schott, A.M. & Krolak-Salmon, P. (2016). Neuroimaging correlates of neuropsychiatric symptoms in Alzheimer's disease: A review of 20 years of research. European Journal of Neurology, 23, 1500–1509. doi:10.1111/ene.13076 [CrossRef]
- Chaytor, N. & Schmitter-Edgecombe, M. (2003). The ecological validity of neuropsychological tests: A review of the literature on everyday cognitive skills. Neuropsychology Review, 13, 181–197. doi:10.1023/B:NERV.0000009483.91468.fb [CrossRef]
- Cockburn, J.M. (1996). Behavioural assessment of memory in normal old age. European Psychiatry, 11 (Suppl. 4), 205s. doi:10.1016/0924-9338(96)88591-9 [CrossRef]
- Cribbie, R.A., Arpin-Cribbie, C.A. & Gruman, J.A. (2009). Tests of equivalence for one-way independent groups designs. Journal of Experimental Education, 78, 1–13. doi:10.1080/00220970903224552 [CrossRef]
- Cribbie, R.A., Gruman, J.A. & Arpin-Cribbie, C.A. (2004). Recommendations for applying tests of equivalence. Journal of Clinical Psychology, 60, 1–10. doi:10.1002/jclp.10217 [CrossRef]
- Dawson, D.R. & Marcotte, T.D. (2017). Special issue on ecological validity and cognitive assessment. Neuropsychological Rehabilitation, 27, 599–602. doi:10.1080/09602011.2017.1313379 [CrossRef]
- Downs, M. & Lord, K. (2017). Person-centered dementia care in the community: A perspective from the United Kingdom. Journal of Gerontological Nursing, 43(8), 11–17. doi:10.3928/00989134-20170515-01 [CrossRef]
- Ducharme, S., Price, B.H. & Dickerson, B.C. (2018). Apathy: A neurocircuitry model based on frontotemporal dementia. Journal of Neurology, Neurosurgery, and Psychiatry, 89, 389–396. doi:10.1136/jnnp-2017-316277 [CrossRef]
- Eramudugolla, R., Cherbuin, N., Easteal, S., Jorm, A.F. & Anstey, K.J. (2012). Self-reported cognitive decline on the informant questionnaire on cognitive decline in the elderly is associated with dementia, instrumental activities of daily living and depression but not longitudinal cognitive change. Dementia and Geriatric Cognitive Disorders, 34, 282–291. doi:10.1159/000345439 [CrossRef]
- Fahrenberg, J., Myrtek, M., Pawlik, K. & Perrez, M. (2007). Ambulatory assessment—Monitoring behavior in daily life settings. European Journal of Psychological Assessment, 23, 206–213. doi:10.1027/1015-5759.23.4.206 [CrossRef]
- Fazio, S., Pace, D., Maslow, K., Zimmerman, S. & Kallmyer, B. (2018). Alzheimer's Association dementia care practice recommendations. The Gerontologist, 58(Suppl. 1), S1–S9. doi:10.1093/geront/gnx182 [CrossRef]
- Frank, L., Basch, E. & Selby, J.V. (2014). The PCORI perspective on patient-centered outcomes research. JAMA, 312, 1513–1514. doi:10.1001/jama.2014.11100 [CrossRef]
- Franzen, M.D. & Wilhelm, K.L. (1996). Conceptual foundations of ecological validity in neuropsychological assessment. In Sbordone, R.J. & Long, C.J. (Eds.), Ecological validity of neuropsychological testing (pp. 91–112). Delray Beach, FL: GR Press/St. Lucie Press.
- Fritsch, T., McClendon, M.J., Wallendal, M.S., Hyde, T.F. & Larsen, J.D. (2014). Prevalence and cognitive bases of subjective memory complaints in older adults: Evidence from a community sample. Journal of Neurodegenerative Diseases, 2014, 176843. doi:10.1155/2014/176843
- Ge, T., Sabuncu, M.R., Smoller, J.W., Sperling, R.A. & Mormino, E.C. (2018). Dissociable influences of APOE epsilon4 and polygenic risk of AD dementia on amyloid and cognition. Neurology, 90, e1605–e1612. doi:10.1212/WNL.0000000000005415
- Gitlin, L.N., Marx, K.A., Stanley, I.H., Hansen, B.R. & Van Haitsma, K.S. (2014). Assessing neuropsychiatric symptoms in people with dementia: A systematic review of measures. International Psychogeriatrics, 26, 1805–1848. doi:10.1017/S1041610214001537
- Gonda, X., Pompili, M., Serafini, G., Carvalho, A.F., Rihmer, Z. & Dome, P. (2015). The role of cognitive dysfunction in the symptoms and remission from depression. Annals of General Psychiatry, 14, 27. doi:10.1186/s12991-015-0068-9
- Gruman, J.A., Cribbie, R.A. & Arpin-Cribbie, C.A. (2007). The effects of heteroscedasticity on tests of equivalence. Journal of Modern Applied Statistical Methods, 6, 132–140. doi:10.22237/jmasm/1177992720
- Hill, N.L., Mogle, J., Wion, R., Munoz, E., DePasquale, N., Yevchak, A.M. & Parisi, J.M. (2016). Subjective cognitive impairment and affective symptoms: A systematic review. The Gerontologist, 56, e109–e127. doi:10.1093/geront/gnw091
- Jack, C.R., Bennett, D.A., Blennow, K., Carrillo, M.C., Dunn, B., Elliott, C.L. & Jessen, F. (2017). 2017 NIA-AA research framework to investigate the Alzheimer's disease continuum. Alzheimer's & Dementia, 13, P890–P891. doi:10.1016/j.jalz.2017.07.294
- Jessen, F., Amariglio, R.E., van Boxtel, M., Breteler, M., Ceccaldi, M., Chételat, G. & Wagner, M. (2014). A conceptual framework for research on subjective cognitive decline in preclinical Alzheimer's disease. Alzheimer's & Dementia, 10, 844–852. doi:10.1016/j.jalz.2014.01.001
- Jicha, G.A. & Nelson, P.T. (2011). Management of frontotemporal dementia: Targeting symptom management in such a heterogeneous disease requires a wide range of therapeutic options. Neurodegenerative Disease Management, 1, 141–156. doi:10.2217/nmt.11.9
- Jonker, C., Geerlings, M.I. & Schmand, B. (2000). Are memory complaints predictive for dementia? A review of clinical and population-based studies. International Journal of Geriatric Psychiatry, 15, 983–991. doi:10.1002/1099-1166(200011)15:11<983::AID-GPS238>3.0.CO;2-5
- Kolanowski, A., Boltz, M., Galik, E., Gitlin, L.N., Kales, H.C., Resnick, B. & Scerpella, D. (2017). Determinants of behavioral and psychological symptoms of dementia: A scoping review of the evidence. Nursing Outlook, 65, 515–529. doi:10.1016/j.outlook.2017.06.006
- Kolanowski, A., Hoffman, L. & Hofer, S.M. (2007). Concordance of self-report and informant assessment of emotional well-being in nursing home residents with dementia. Journals of Gerontology. Series B, Psychological Sciences and Social Sciences, 62, P20–P27. doi:10.1093/geronb/62.1.P20
- Kolanowski, A.M., Litaker, M. & Buettner, L. (2005). Efficacy of theory-based activities for behavioral symptoms of dementia. Nursing Research, 54, 219–228. doi:10.1097/00006199-200507000-00003
- Komarova, N.L. & Thalhauser, C.J. (2011). High degree of heterogeneity in Alzheimer's disease progression patterns. PLoS Computational Biology, 7(11), e1002251. doi:10.1371/journal.pcbi.1002251
- Lam, B., Masellis, M., Freedman, M., Stuss, D.T. & Black, S.E. (2013). Clinical, imaging, and pathological heterogeneity of the Alzheimer's disease syndrome. Alzheimer's Research & Therapy, 5(1), 1. doi:10.1186/alzrt155
- Lanctôt, K.L., Amatniek, J., Ancoli-Israel, S., Arnold, S.E., Ballard, C., Cohen-Mansfield, J. & Boot, B. (2017). Neuropsychiatric signs and symptoms of Alzheimer's disease: New treatment paradigms. Alzheimer's & Dementia, 3, 440–449. doi:10.1016/j.trci.2017.07.001
- Lipsey, M.W. (1990). Design sensitivity: Statistical power for experimental research. Newbury Park, CA: Sage.
- Martyr, A., Nelis, S.M. & Clare, L. (2014). Predictors of perceived functional ability in early-stage dementia: Self-ratings, informant ratings and discrepancy scores. International Journal of Geriatric Psychiatry, 29, 852–862. doi:10.1002/gps.4071
- Massimo, L., Evans, L.K. & Grossman, M. (2014). Differentiating subtypes of apathy to improve person-centered care in fronto-temporal degeneration. Journal of Gerontological Nursing, 40(10), 58–65. doi:10.3928/00989134-20140827-01
- Massimo, L., Powers, J.P., Evans, L.K., McMillan, C.T., Rascovsky, K., Eslinger, P. & Grossman, M. (2015). Apathy in frontotemporal degeneration: Neuroanatomical evidence of impaired goal-directed behavior. Frontiers in Human Neuroscience, 9, 611. doi:10.3389/fnhum.2015.00611
- McClelland, G.H. (2000). Increasing statistical power without increasing sample size. American Psychologist, 55, 963–964. doi:10.1037/0003-066X.55.8.963
- McMillan, C.T., Avants, B.B., Cook, P., Ungar, L., Trojanowski, J.Q. & Grossman, M. (2014). The power of neuroimaging biomarkers for screening frontotemporal dementia. Human Brain Mapping, 35, 4827–4840. doi:10.1002/hbm.22515
- Mogle, J.A., Hill, N. & McDermott, C. (2017). Subjective memory in a national sample: Predicting psychological well-being. Gerontology, 63, 460–468. doi:10.1159/000466691
- Mungas, D., Beckett, L., Harvey, D., Farias, S.T., Reed, B., Carmichael, O. & DeCarli, C. (2010). Heterogeneity of cognitive trajectories in diverse older persons. Psychology and Aging, 25, 606–619. doi:10.1037/a0019502
- Negash, S., Xie, S., Davatzikos, C., Clark, C.M., Trojanowski, J.Q., Shaw, L.M. & Arnold, S.E. (2013). Cognitive and functional resilience despite molecular evidence of Alzheimer's disease pathology. Alzheimer's & Dementia, 9(3), e89–e95. doi:10.1016/j.jalz.2012.01.009
- Oren, N., Yogev-Seligmann, G., Ash, E., Hendler, T., Giladi, N. & Lerner, Y. (2015). The Montreal Cognitive Assessment in cognitively-intact elderly: A case for age-adjusted cutoffs. Journal of Alzheimer's Disease, 43, 19–22. doi:10.3233/JAD-140774
- Oxtoby, N.P., Garbarino, S., Firth, N.C., Warren, J.D., Schott, J.M. & Alexander, D.C. (2017). Data-driven sequence of changes to anatomical brain connectivity in sporadic Alzheimer's disease. Frontiers in Neurology, 8, 580. doi:10.3389/fneur.2017.00580
- Parsons, T.D. (2016). Clinical neuropsychology and technology: What's new and how we can use it. Geneva, Switzerland: Springer. doi:10.1007/978-3-319-31075-6
- Patient-Centered Outcomes Research Institute. (2017). Dementia methods pre-summit summary and recommendations. Retrieved from https://aspe.hhs.gov/system/files/pdf/257891/DementiaMethods.pdf
- Perez-Nievas, B.G., Stein, T.D., Tai, H.-C., Dols-Icardo, O., Scotton, T.C., Barroeta-Espar, I. & Gómez-Isla, T. (2013). Dissecting phenotypic traits linked to human resilience to Alzheimer's pathology. Brain, 136, 2510–2526. doi:10.1093/brain/awt171
- Petersen, R.C., Stevens, J.C., Ganguli, M., Tangalos, E.G., Cummings, J.L. & DeKosky, S.T. (2001). Practice parameter: Early detection of dementia: Mild cognitive impairment (an evidence-based review). Report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology, 56, 1133–1142. doi:10.1212/WNL.56.9.1133
- Placek, K., Massimo, L., Olm, C., Ternes, K., Firn, K., Van Deerlin, V. & McMillan, C.T. (2016). Cognitive reserve in frontotemporal degeneration: Neuroanatomic and neuropsychological evidence. Neurology, 87, 1813–1819. doi:10.1212/WNL.0000000000003250
- Poulakis, K., Pereira, J.B., Mecocci, P., Vellas, B., Tsolaki, M., Kloszewska, I. & Westman, E. (2018). Heterogeneous patterns of brain atrophy in Alzheimer's disease. Neurobiology of Aging, 65, 98–108. doi:10.1016/j.neurobiolaging.2018.01.009
- Rabin, L.A., Smart, C.M., Crane, P.K., Amariglio, R.E., Berman, L.M., Boada, M. & Sikkes, S.A. (2015). Subjective cognitive decline in older adults: An overview of self-report measures used across 19 international research studies. Journal of Alzheimer's Disease, 48(Suppl. 1), S63–S86. doi:10.3233/JAD-150154
- Reese, C.M., Cherry, K.E. & Norris, L.E. (1999). Practical memory concerns of older adults. Journal of Clinical Geropsychology, 5, 231–244. doi:10.1023/A:1022984622951
- Reisberg, B., Shulman, M.B., Torossian, C., Leng, L. & Zhu, W. (2010). Outcome over seven years of healthy adults with and without subjective cognitive impairment. Alzheimer's & Dementia, 6, 11–24. doi:10.1016/j.jalz.2009.10.002
- Roehr, S., Luck, T., Pabst, A., Bickel, H., König, H.-H., Lühmann, D. & Riedel-Heller, S.G. (2017). Subjective cognitive decline is longitudinally associated with lower health-related quality of life. International Psychogeriatrics, 29, 1939–1950. doi:10.1017/S1041610217001399
- Rubinov, M. & Sporns, O. (2010). Complex network measures of brain connectivity: Uses and interpretations. Neuroimage, 52, 1059–1069. doi:10.1016/j.neuroimage.2009.10.003
- Schmader, T., Johns, M. & Forbes, C. (2008). An integrated process model of stereotype threat effects on performance. Psychological Review, 115, 336–356. doi:10.1037/0033-295X.115.2.336
- Sliwinski, M.J., Mogle, J.A., Hyun, J., Munoz, E., Smyth, J.M. & Lipton, R.B. (2018). Reliability and validity of ambulatory cognitive assessments. Assessment, 25, 14–30. doi:10.1177/1073191116643164
- Sperling, R.A., Aisen, P.S., Beckett, L.A., Bennett, D.A., Craft, S., Fagan, A.M. & Phelps, C.H. (2011). Toward defining the preclinical stages of Alzheimer's disease: Recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimer's & Dementia, 7, 280–292. doi:10.1016/j.jalz.2011.03.003
- Spooner, D.M. & Pachana, N.A. (2006). Ecological validity in neuropsychological assessment: A case for greater consideration in research with neurologically intact populations. Archives of Clinical Neuropsychology, 21, 327–337. doi:10.1016/j.acn.2006.04.004
- Taylor, J.S., DeMers, S.M., Vig, E.K. & Borson, S. (2012). The disappearing subject: Exclusion of people with cognitive impairment and dementia from geriatrics research. Journal of the American Geriatrics Society, 60, 413–419. doi:10.1111/j.1532-5415.2011.03847.x
- Williams, K.N., Perkhounkova, Y., Jao, Y.-L., Bossen, A., Hein, M., Chung, S. & Turk, M. (2018). Person-centered communication for nursing home residents with dementia: Four communication analysis methods. Western Journal of Nursing Research, 40, 1012–1031. doi:10.1177/0193945917697226
- Wullschleger, M., Aghlmandi, S., Egger, M. & Zwahlen, M. (2014). High incorrect use of the standard error of the mean (SEM) in original articles in three cardiovascular journals evaluated for 2012. PLoS One, 9(10), e110364. doi:10.1371/journal.pone.0110364