Many people experience forgetfulness as they age, but it’s often difficult to tell if these memory issues are a normal part of aging or a sign of something more serious.
A new study finds that a simple, self-administered test developed by researchers at The Ohio State University Wexner Medical Center, College of Medicine and College of Public Health can identify the early, subtle signs of dementia sooner than the most commonly used office-based standard cognitive test.
This earlier detection by the Self-Administered Gerocognitive Examination (SAGE test) is critical to effective treatment, especially as new therapeutics for dementia and Alzheimer’s disease are being developed and approved.
While the test does not definitively diagnose problems like Alzheimer's, it allows doctors to establish a baseline of their patients' cognitive functioning, and repeat testing lets them follow memory and thinking abilities over time. "Often primary care physicians may not recognize subtle cognitive deficits during routine office visits," said Douglas Scharre, the study's lead author.
The eight-year study followed 665 consecutive patients at Ohio State's Center for Cognitive and Memory Disorders. Researchers found that the SAGE test accurately identified patients with mild cognitive impairment who eventually progressed to a dementia diagnosis at least six months earlier than the Mini-Mental State Examination (MMSE), the most commonly used testing method.
Among the 164 patients with baseline mild cognitive impairment, 70 converted to dementia, a conversion rate of about 43% (70/164) over three to four years, which is similar to rates from other academic center-based studies, Scharre said.
The test can be taken anywhere, whenever cognitive concerns arise. It takes only about 10-15 minutes to complete, and its four interchangeable forms are designed to reduce learning effects from repeated testing over time.
“Any time you or your family member notices a change in your brain function or personality you should take this test,” Scharre said. “If that person takes the test every six months and their score drops two or three points over a year and a half, that is a significant difference, and their doctor can use that information to get a jump on identifying the causes of the cognitive loss and to make treatment decisions.”
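As a concrete illustration of that rule of thumb, here is a minimal Python sketch, not part of the study or of any official SAGE software, that flags a two-point-or-greater drop within roughly 18 months of serial testing (dates and scores are fabricated):

```python
from datetime import date

# Hypothetical serial SAGE scores for one patient: (test_date, total_score).
# Values are illustrative only, not real patient data.
history = [
    (date(2023, 1, 10), 19),
    (date(2023, 7, 12), 18),
    (date(2024, 1, 15), 16),
]

def significant_decline(history, min_drop=2, window_days=548):
    """Flag a drop of >= min_drop points within ~18 months (548 days),
    mirroring the rule of thumb quoted above."""
    for i, (d1, s1) in enumerate(history):
        for d2, s2 in history[i + 1:]:
            if (d2 - d1).days <= window_days and s1 - s2 >= min_drop:
                return True
    return False

if significant_decline(history):
    print("Score decline meets the 'discuss with your doctor' threshold.")
```

On the fabricated history above, the three-point drop between January 2023 and January 2024 would trigger the flag.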
A digital version of the test will also be integrated with the Ohio State Wexner Medical Center's electronic medical records system, making it easier for patients and their health care providers to self-test and to store and review results.
“Based on cognitive score changes, clinicians and families may decide it is time to act on safety and supervision needs. This might include, for example, medication oversight, financial assistance, driving limitations, setting up durable Powers of Attorney and other legal arrangements/trusts, change in living arrangements, and enhanced caregiving support,” Scharre said.
More than 6 million Americans have Alzheimer's disease, a number expected to rise to more than 13 million by 2050. Deaths from Alzheimer's and other dementias increased 16% during the COVID-19 pandemic, according to the Alzheimer's Association.
Discussion
In this systematic review, we evaluated 10 brief self-administered computerized cognitive assessment measures designed to detect cognitive disorders in older adults. Similar to past reviews of computerized cognitive tools (20, 21), we found significant variability across measures with regard to the characteristics and design of the tools, the sizes of validation samples, availability in different languages, and psychometric qualities, all of which are crucial considerations for potential widescale implementation of these measures in clinical care. Specifically, we found that only a few of the reviewed measures were validated in sufficiently large samples (CAMCI, CogState BB) or are available in multiple languages (CANS-MCI, CNSVS, CogState, CogState BB).
Test-retest reliability, which is critical for self-administered tools that aim to monitor cognitive functions, was reported for only 60% of the tools, and internal consistency was reported for even fewer. While almost all reviewed measures reported data on concurrent validity, the estimates for several individual domain subtests within some tools were low.
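For readers less familiar with these psychometric terms, the sketch below shows how test-retest reliability (here, a simple Pearson correlation between two sessions) and internal consistency (Cronbach's alpha) are commonly computed. The data are fabricated for illustration, and the code is generic rather than taken from any reviewed tool:

```python
import numpy as np

# Illustrative data only: six participants' total scores at two sessions.
session1 = np.array([20., 17., 22., 15., 19., 18.])
session2 = np.array([19., 16., 22., 14., 20., 17.])

# Test-retest reliability: correlation between the two sessions.
r = np.corrcoef(session1, session2)[0, 1]

# Internal consistency: Cronbach's alpha over a participants-by-items matrix
# (rows = participants, columns = items; again, fabricated numbers).
items = np.array([
    [3, 4, 2, 5],
    [2, 3, 2, 4],
    [4, 5, 3, 5],
    [1, 2, 1, 3],
    [3, 4, 3, 4],
    [2, 3, 2, 4],
])

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"test-retest r = {r:.2f}, Cronbach's alpha = {cronbach_alpha(items):.2f}")
```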
These findings are concerning, particularly when considering the need for a battery to distinguish among different types of MCI and dementia and inform differential diagnoses in non-specialty settings (16). On the other hand, we found that most measures required minimal involvement of an examiner in test administration and scoring of results and were available as standalone applications on several device types (e.g., PC, tablet computer, etc.).
These features are important benefits of self-administered computerized tools, particularly if additional built-in functionality for integrating results into electronic medical records (EMR) systems is developed (54). In general, despite the promise that self-administered cognitive tests hold for clinical applications, important gaps remain in the scientific rigor of development, validation, and feasibility studies of these measures. Below we discuss critical areas of need for future development and validation of self-administered cognitive measures that would facilitate their potential for widescale clinical implementation.
One of the most critical gaps identified in the current review is the size and demographic constitution of the validation samples. In particular, several studies in this review used fairly small validation samples (<50 participants in each diagnostic group), and the majority of validation samples were composed of White, highly educated individuals.
Because we did not identify systematic reporting of power analyses for detecting main effects in the reviewed studies, we applied a generous estimate of 50 participants per group as part of our criteria. However, given the variability in statistical approaches used across these studies, reporting robust power estimations would not only support the overall results but also ensure transparency, comparability, and generalizability of results across cohorts.
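To make this concrete, the following is a minimal sketch of an a priori power estimation using statsmodels; the effect size and other parameters are arbitrary assumptions for illustration, not values drawn from any reviewed study:

```python
# A priori power analysis for a two-group comparison (e.g., MCI vs. controls).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.6,   # assumed Cohen's d (illustrative, not from the review)
    alpha=0.05,        # two-sided significance level
    power=0.80,        # desired power
    ratio=1.0,         # equal group sizes
)
print(f"Required participants per group: {n_per_group:.0f}")
```

Under these assumed parameters the analysis calls for roughly 45 participants per group, in the same neighborhood as the 50-per-group criterion the review applied.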
Another important finding of this review is the scarcity of feasibility and implementation studies of self-administered instruments in care settings. In contrast to highly standardized research settings, self-administration of cognitive assessments in the real world may be subject to interruptions and practical limitations such as time and space, which could be detrimental not only to feasibility but also to the validity of results (22).
Some domains, such as orientation, may not be suitable for self-administration at all, as it would be difficult to ensure the fidelity of responses on such tasks in the absence of an examiner. Given these considerations, research on the development and validation of self-administered computerized measures must be supported by well-designed feasibility and implementation studies, which will critically inform the clinical utility of these measures in their intended settings.
Specifically, feasibility and implementation studies have the potential to identify facilitators of and barriers to clinical applications and to inform the development of optimal diagnostic and care pathways; based on the insights from the two measures studied in clinical settings (CAMCI; 53, 55 and CogState BB; 56), such studies are also critical for informing targeted solutions for individual practices.
The automated delivery of results is key to the clinical utility of computerized tools. To facilitate the integration of self-administered tests in non-specialty settings, they should offer secure, easy-to-interpret automated reports, which would ideally guide the provider on follow-up care and diagnostic considerations based on evidence-based practice guidelines (54).
Of the measures reviewed, only CANS-MCI features an automated report that provides such recommendations to physicians. Moreover, a study of CAMCI with primary care physicians (53) found that providers expressed a need for training in interpreting the report, which highlights the need for refinement of automated reporting and for empirical studies of non-specialty providers' attitudes toward and perceptions of cognitive testing results.
With regard to patient-level characteristics, there are a number of critical considerations, particularly given the dearth of normative or validation data in older adults who are racially/ethnically diverse or have low educational attainment. Importantly, one of the prior studies on CogState BB reported that older adults with lower education were less likely to meet the integrity criteria on three of five subtests of the battery (47).
This is a major issue given that one of the most promising applications of self-administered cognitive assessments is supporting services in remote areas and for populations less likely to seek specialty evaluations. Moreover, numerous studies suggest that older adults in the U.S. who report Hispanic ethnicity, non-White race, or low education are at higher risk for neurodegenerative diseases (7) and experience significant disparities in healthcare access and delivery (57).
Well-validated self-administered assessments may help substantially reduce these disparities given their potential to deliver tests in different languages (23), but only if they undergo rigorous scientific development and cross-cultural validation. In addition to language and education variables, it is important to validate computerized tools across socioeconomic groups, as past evidence suggests that older adults with lower socioeconomic status report lower levels of intention to use computerized cognitive testing (24). Finally, successful clinical implementation of even the most well-validated tools would likely require continuous education and outreach to patients belonging to underrepresented groups as well as to their medical providers and families.
Another important variable to consider for self-administration of computerized cognitive measures is the impact of familiarity with technology on test results. While some studies (CCS; 38) reported no differences in test scores between older adults with and without technology experience, these variables do appear to play a significant role through interactions with age (CogState BB; 48) and diagnostic status (C-TOC; 49).
Moreover, comparisons between content-equivalent paper and electronic versions of a measure revealed that older adults with no technology experience performed worse on the electronic version than those with digital proficiency (eSAGE; 50). Finally, while most studies examine the associations between familiarity with technology and computerized cognitive testing results at the group level, systematic research on the impact of these variables for individual patients is necessary to support the utility of self-administered assessments in clinical practice.
Regarding technical considerations, the practice parameters on optimal development and validation of computerized cognitive tools, including issues related to end-user agreements, privacy, data security, and reporting (23, 25), are highly relevant to self-administered paradigms. Of particular relevance are challenges related to the bring-your-own-device (BYOD) model and dependence on a broadband connection, which introduce timing and measurement error and may thus lead to inaccurate interpretation of results. Built-in integrity measures designed to address this challenge are featured in some self-administered tools (e.g., CogState; 47, 48) but are not widely available across the reviewed instruments.
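To illustrate one form such a built-in integrity measure could take, a self-administered test might probe the host device's timer resolution before running a reaction-time task and flag results collected on devices that cannot time responses precisely. This is a generic sketch of the idea, not the actual mechanism used by CogState or any other reviewed tool:

```python
import time

def estimate_timer_resolution(samples=10_000):
    """Estimate the smallest observable tick of the high-resolution clock
    by finding the minimum nonzero gap between successive readings."""
    min_gap = float("inf")
    prev = time.perf_counter()
    for _ in range(samples):
        now = time.perf_counter()
        gap = now - prev
        if 0 < gap < min_gap:
            min_gap = gap
        prev = now
    return min_gap

# Illustrative threshold: reaction-time tasks typically need
# millisecond-level timing; flag devices that cannot provide it.
resolution = estimate_timer_resolution()
if resolution > 1e-3:
    print(f"Timer resolution {resolution:.4f}s is too coarse; flag results.")
else:
    print(f"Timer resolution {resolution:.6f}s is acceptable.")
```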
Moreover, because hardware and operating systems are evolving rapidly across both PC and tablet platforms, computerized assessments require continuous investment in quality assurance testing and software maintenance, and these challenges grow when many devices (i.e., BYOD models) are supported. Additionally, availability of a measure on multiple devices requires supporting research to establish the equivalence of normative and psychometric data across different platforms and input parameters, such as touchscreen vs. keyboard response, screen size, etc.
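Establishing such equivalence is itself a statistical exercise; one common approach is two one-sided tests (TOST). The sketch below uses statsmodels with fabricated scores and an arbitrary ±1.5-point equivalence margin, purely to illustrate the idea rather than any reviewed tool's actual validation procedure:

```python
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(0)
# Fabricated total scores from the same test on two input modalities.
touchscreen = rng.normal(loc=20.0, scale=3.0, size=60)
keyboard = rng.normal(loc=20.3, scale=3.0, size=60)

# TOST: conclude equivalence if the group difference lies within an
# assumed margin of +/- 1.5 points (an arbitrary choice for illustration).
# ttost_ind returns the overall p-value plus the two one-sided test results.
p_value, res_lower, res_upper = ttost_ind(touchscreen, keyboard, low=-1.5, upp=1.5)
print(f"TOST p = {p_value:.3f} -> equivalent at alpha=0.05: {p_value < 0.05}")
```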
Finally, past studies have highlighted concerns regarding the underreporting of privacy and security safeguards, and their limitations, in currently available computerized measures (22); test developers should strive to explicitly disclose any potential consequences of data loss or breaches, particularly for individual patients in clinical settings. Collaborative efforts among researchers, funding bodies, industry, policy regulators, and consumers are therefore necessary to develop robust, sustainable platforms that support optimal levels of security, privacy, confidentiality, and cross-site data sharing, in order to promote and maintain successful implementation of computerized tools in everyday clinical practice while meeting programming cost demands.
This study has a number of limitations. First, while all attempts were made to conduct a comprehensive search of the available literature, our results were limited to studies available in the databases searched. Second, due to variability in study designs and test statistics, a quantitative summary of the findings was not possible. Finally, our review was limited to studies reporting on instruments available at least in English, and a number of promising self-administered computerized cognitive measures validated in non-English cohorts were not considered.
At the same time, a major strength of this study is the scope of reviewed characteristics of the included measures, spanning not only psychometric qualities but also the functional and technological features critical to clinical implementation. Additionally, our review comes at a time when the need for self-administered cognitive assessments in both clinical and research settings has never been greater.
In light of rapidly developing technologies for identifying disease biomarkers, future studies should examine the associations of self-administered cognitive assessments with biomarkers of neurodegenerative diseases, particularly given promising existing studies within this line of research (CogState; 58 and Computerized Cognitive Composite [C3]; 59). Additionally, future studies of self-administered cognitive measures in clinical settings should explore optimal implementation paradigms and provider behavior patterns, which would be valuable for informing public healthcare policy and efforts to support earlier diagnosis of cognitive disorders in older adults.
Reference link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7987552/
More information: Douglas W. Scharre et al, Self-Administered Gerocognitive Examination: longitudinal cohort testing for the early detection of dementia conversion, Alzheimer’s Research & Therapy (2021). DOI: 10.1186/s13195-021-00930-4