Alzheimer’s disease: A new deep learning model can accurately predict cognitive decline up to two years into the future


A new model developed at MIT can help predict whether patients at risk for Alzheimer’s disease will experience clinically significant cognitive decline, by forecasting their cognition test scores up to two years into the future.

The model could be used to improve the selection of candidate drugs and participant cohorts for clinical trials, which have been notoriously unsuccessful thus far.

It would also let patients know they may experience rapid cognitive decline in the coming months and years, so they and their loved ones can prepare.

Pharmaceutical firms over the past two decades have injected hundreds of billions of dollars into Alzheimer’s research.

Yet the field has been plagued with failure: Between 1998 and 2017, there were 146 unsuccessful attempts to develop drugs to treat or prevent the disease, according to a 2018 report from the Pharmaceutical Research and Manufacturers of America.

In that time, only four new medicines were approved, and only to treat symptoms. More than 90 drug candidates are currently in development.

Studies suggest greater success in bringing drugs to market could come down to recruiting candidates who are in the disease’s early stages, before symptoms are evident, which is when treatment is most effective.

In a paper to be presented next week at the Machine Learning for Health Care conference, MIT Media Lab researchers describe a machine-learning model that can help clinicians zero in on that specific cohort of participants.

They first trained a “population” model on an entire dataset that included clinically significant cognitive test scores and other biometric data from Alzheimer’s patients and from healthy individuals, collected during semiannual doctor’s visits.

From the data, the model learns patterns that can help predict how the patients will score on cognitive tests taken between visits.

In new participants, a second model, personalized for each patient, continuously updates score predictions based on newly recorded data, such as information collected during the most recent visits.

Experiments indicate accurate predictions can be made looking ahead six, 12, 18, and 24 months.

Clinicians could thus use the model to help select at-risk clinical trial participants who are likely to demonstrate rapid cognitive decline, possibly even before other clinical symptoms emerge.

Treating such patients early on may help clinicians better track which antidementia medicines are and aren’t working.

“Accurate prediction of cognitive decline from six to 24 months is critical to designing clinical trials,” says Oggi Rudovic, a Media Lab researcher.

“Being able to accurately predict future cognitive changes can reduce the number of visits the participant has to make, which can be expensive and time-consuming.

Apart from helping develop a useful drug, the goal is to help reduce the costs of clinical trials to make them more affordable and done on larger scales.”

Joining Rudovic on the paper are: Yuria Utsumi, an undergraduate student, and Kelly Peterson, a graduate student, both in the Department of Electrical Engineering and Computer Science; Ricardo Guerrero and Daniel Rueckert, both of Imperial College London; and Rosalind Picard, a professor of media arts and sciences and director of affective computing research in the Media Lab.

Population to personalization

For their work, the researchers leveraged the world’s largest Alzheimer’s disease clinical trial dataset, the Alzheimer’s Disease Neuroimaging Initiative (ADNI).

The dataset contains data from around 1,700 participants, with and without Alzheimer’s, recorded during semiannual doctor’s visits over 10 years.

Data includes their AD Assessment Scale-cognition sub-scale (ADAS-Cog13) scores, the most widely used cognitive metric for clinical trials of Alzheimer’s disease drugs.

The test assesses memory, language, and orientation on a scale of increasing severity up to 85 points.

The dataset also includes MRI scans, demographic and genetic information, and cerebrospinal fluid measurements.

In all, the researchers trained and tested their model on a sub-cohort of 100 participants, each of whom made more than 10 visits, had less than 85 percent of their data missing, and had more than 600 computable features.

Of those participants, 48 were diagnosed with Alzheimer’s disease.
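As a rough sketch of that cohort-selection rule (more than 10 visits, less than 85 percent of data missing), the filter might look like this in pandas; the column names and table layout are assumptions, not the actual ADNI schema.

```python
# Hypothetical cohort filter; "participant_id" and the per-visit layout
# are assumed for illustration, not the real ADNI schema.
import pandas as pd

def select_cohort(df, min_visits=11, max_missing=0.85):
    """Keep participants with >10 visits and <85% of feature values missing."""
    kept = []
    for pid, visits in df.groupby("participant_id"):
        feature_cols = visits.columns.drop("participant_id")
        missing_frac = visits[feature_cols].isna().mean().mean()
        if len(visits) >= min_visits and missing_frac < max_missing:
            kept.append(pid)
    return df[df["participant_id"].isin(kept)]

# Toy usage: participant "A" has 12 visits, "B" only 3.
df = pd.DataFrame({"participant_id": ["A"] * 12 + ["B"] * 3,
                   "adas_cog13": list(range(12)) + [5, 6, 7]})
print(sorted(select_cohort(df)["participant_id"].unique()))  # ['A']
```

The thresholds are parameters, so the same helper could be reused with stricter or looser inclusion criteria.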

But data are sparse, with different combinations of features missing for most of the participants.

To tackle that, the researchers used the data to train a population model powered by a “nonparametric” probability framework, called Gaussian Processes (GPs), which has flexible parameters to fit various probability distributions and to process uncertainties in data.

This technique measures similarities between variables, such as patient data points, to predict a value for an unseen data point — such as a cognitive score.

The output also contains an estimate for how certain it is about the prediction.

The model works robustly even when analyzing datasets with missing values or lots of noise from different data-collecting formats.
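To make the GP idea concrete, here is a minimal plain-NumPy Gaussian-process regression that predicts a score at the 6-, 12-, 18-, and 24-month horizons along with an uncertainty estimate. The kernel, noise level, and toy data are invented for illustration and are not the paper's actual model.

```python
# Minimal GP regression sketch; data, kernel, and noise are invented.
import numpy as np

def rbf_kernel(a, b, length_scale=12.0, variance=25.0):
    """Squared-exponential kernel on 1-D inputs (months since baseline)."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(0)
months = np.sort(rng.uniform(0, 24, size=40))          # toy visit times
scores = 20 + 0.5 * months + rng.normal(0, 1.0, 40)    # toy ADAS-Cog13

noise = 1.0  # assumed observation-noise standard deviation
K = rbf_kernel(months, months) + noise**2 * np.eye(len(months))
resid = scores - scores.mean()
K_inv_resid = np.linalg.solve(K, resid)

# Posterior mean and uncertainty at the four forecast horizons.
horizons = np.array([6.0, 12.0, 18.0, 24.0])
K_star = rbf_kernel(horizons, months)
mean = scores.mean() + K_star @ K_inv_resid
cov = rbf_kernel(horizons, horizons) - K_star @ np.linalg.solve(K, K_star.T)
std = np.sqrt(np.maximum(np.diag(cov), 0.0))

for h, m, s in zip(horizons, mean, std):
    print(f"month {h:4.1f}: predicted score {m:5.1f} ± {s:.2f}")
```

The `std` output is the "how certain it is" estimate the article mentions: it grows where training data are sparse and shrinks where they are dense.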

But, in evaluating the model on new patients from a held-out portion of participants, the researchers found the model’s predictions weren’t as accurate as they could be. So, they personalized the population model for each new patient.

The system would then progressively fill in data gaps with each new patient visit and update the ADAS-Cog13 score prediction accordingly, by continuously updating the previously unknown distributions of the GPs.

After about four visits, the personalized models significantly reduced the error rate in predictions. They also outperformed various traditional machine-learning approaches used for clinical data.
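A toy version of that update loop, under the same kind of GP assumptions: the population trend serves as the prior mean, and each new visit conditions the model on the patient's own scores, so the forecast tightens and shifts toward the patient's actual trajectory. The trend line, kernel settings, and visit data are all illustrative.

```python
# Illustrative personalization step; trend, kernel, and data are invented.
import numpy as np

def rbf(a, b, ls=12.0, var=25.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def personalized_forecast(visit_months, visit_scores, target_month,
                          population_mean, noise=1.0):
    """GP posterior at target_month given this patient's visits so far."""
    vm = np.asarray(visit_months)
    resid = np.asarray(visit_scores) - population_mean(vm)
    K = rbf(vm, vm) + noise**2 * np.eye(len(vm))
    k_star = rbf(np.array([target_month]), vm)
    mean = population_mean(np.array([target_month]))[0] \
        + (k_star @ np.linalg.solve(K, resid))[0]
    var = rbf(np.array([target_month]), np.array([target_month]))[0, 0] \
        - (k_star @ np.linalg.solve(K, k_star.T))[0, 0]
    return mean, np.sqrt(max(var, 0.0))

# Assumed population trend: scores rise ~0.4 points/month from 20.
pop_mean = lambda m: 20.0 + 0.4 * m

# A hypothetical patient declining faster than the population average.
months = [0.0, 6.0, 12.0, 18.0]
scores = [20.0, 24.0, 28.0, 32.0]

# Month-24 forecast after 1 visit vs. after 4 visits.
for n in (1, 4):
    m, s = personalized_forecast(months[:n], scores[:n], 24.0, pop_mean)
    print(f"after {n} visit(s): month-24 forecast {m:.1f} ± {s:.2f}")
```

With one visit the forecast simply echoes the population trend; after four visits it moves toward this patient's steeper decline, with a visibly smaller uncertainty band.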

Learning how to learn

But the researchers found the personalized models’ results were still suboptimal.

To fix that, they invented a novel “metalearning” scheme that learns to automatically choose which type of model, population or personalized, works best for any given participant at any given time, depending on the data being analyzed.

Metalearning has been used before in computer vision and machine translation to learn new skills or adapt to new environments rapidly from a few training examples.

But this is the first time it’s been applied to tracking cognitive decline of Alzheimer’s patients, where limited data is a main challenge, Rudovic says.

Image: A model developed at MIT predicts the cognitive decline of patients at risk for Alzheimer’s disease by forecasting their cognition test scores up to two years in the future, which could help zero in on the right patients to select for clinical trials. Credit: Christine Daniloff, MIT.

The scheme essentially simulates how the different models perform on a given task — such as predicting an ADAS-Cog13 score — and learns the best fit.

During each visit of a new patient, the scheme assigns the appropriate model based on the previous data. For patients with noisy, sparse data during early visits, for instance, population models make more accurate predictions. When patients start with more data, or accumulate more through subsequent visits, personalized models perform better.

This helped reduce the error rate for predictions by a further 50 percent. “We couldn’t find a single model or fixed combination of models that could give us the best prediction,” Rudovic says. “So, we wanted to learn how to learn with this meta-learning scheme. It’s like a model on top of a model that acts as a selector, trained using metaknowledge to decide which model is better to deploy.”
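A deliberately simplified selector in the spirit of that scheme: at each visit, deploy whichever model, population or personalized, has had the lower error on the visits seen so far. The real scheme is learned from metaknowledge rather than hard-coded, so this sketch, with invented stand-in models, only illustrates the idea.

```python
# Simplified model selector; the paper's scheme is learned, not a fixed rule.
def choose_model(history, population_model, personalized_model):
    """history: list of (input, true_score) pairs from past visits."""
    if not history:                      # no data yet: trust the population
        return population_model
    def mean_abs_err(model):
        return sum(abs(model(x) - y) for x, y in history) / len(history)
    if mean_abs_err(personalized_model) < mean_abs_err(population_model):
        return personalized_model
    return population_model

# Toy usage with stand-in models (x = months since baseline).
population = lambda x: 20 + 0.4 * x      # average decline
personalized = lambda x: 20 + 0.7 * x    # this patient's fitted decline
history = [(6, 24.2), (12, 28.5)]        # this patient declines fast
best = choose_model(history, population, personalized)
print("personalized chosen:", best is personalized)  # True
```

With no history the selector defaults to the population model, mirroring the observation above that population models win on sparse early-visit data.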

Next, the researchers are hoping to partner with pharmaceutical firms to implement the model into real-world Alzheimer’s clinical trials. Rudovic says the model can also be generalized to predict various metrics for Alzheimer’s and other diseases.

Alzheimer’s Disease (AD) is a neurodegenerative disease characterized by progressive memory loss, cognitive impairment, and general disability; it is the most common cause of dementia.

The progression of AD comprises a long, unnoticed preclinical stage, followed by a prodromal stage of Mild Cognitive Impairment (MCI) that leads to severe dementia and eventually death (1).

While no disease-modifying treatment is currently available for AD, a large number of drugs are in development and encouraging early-stage results from clinical trials provide for the first time a concrete hope that one or more therapies may become available in a few years (2).

Because the neuropathology of AD begins years before clinical symptoms become apparent, by which point progressive neurodegeneration has already irreversibly damaged the brain, emerging treatments will likely have the greatest effect when provided at the earliest disease stages.

Thus, the prompt identification of subjects at high risk for conversion to AD is of great importance.

The ability to identify declining individuals at the prodromal AD stage provides a critical time window for early clinical management, treatment and care planning, and the design of clinical drug trials (3).

Precise identification and early treatment of at-risk subjects would improve outcomes of clinical trials and reduce healthcare costs in clinical practice.

However, simulations also suggest that the health care system is not prepared to handle the potentially high volume of patients who would be eligible for treatment (2).

MCI currently represents the earliest clinically detectable stage of a potential ongoing progression toward AD or other dementias.

The cognitive decline in MCI is abnormal given an individual’s age and education level, but does not interfere with daily activities, and thus does not meet criteria for AD.

However, only 20–40% of individuals with MCI will progress to AD within 3 years, with a lower rate of conversion reported in epidemiologic samples than in clinical ones (4, 5).

Currently, there are no means to provide patients diagnosed with MCI with an early prognosis for conversion to AD.

While changes in several biomarkers prior to developing AD have been reported, no single biomarker appears to adequately predict the conversion from MCI to AD with an acceptable level of accuracy.

As such, there is increasing evidence that a combination of biomarkers can best predict the conversion to AD (3, 6–9).

In the current age of big data and artificial intelligence, considerable effort has been dedicated to developing machine learning algorithms that can predict the conversion to AD in subjects with MCI.

In almost all medical fields, the introduction of machine-learning-based decision-making tools into research and clinical practice, and more generally the shift toward a personalized medicine paradigm, is currently a debated topic, viewed as an opportunity to improve clinical outcomes.

Such objective tools may provide individual predictions with a stated degree of confidence, based on information that can be collected about the subject, so that researchers and clinicians can draw on these predictions to make better and more effective decisions (10).

So far, many studies have focused on predicting conversion to AD in MCI patients using different combinations of data, including brain imaging, CSF biomarkers, genotyping, demographic and clinical information, and cognitive performance, achieving varying levels of accuracy (7–19); see (20, 21) for a recent review of the best-performing algorithms presented in the scientific literature so far.

However, while combining different biomarkers improves model accuracy, there is little consensus on a specific combined AD prediction model, and translation into practice is still lacking.

One possible reason for this is that current algorithms generally rely on expensive and/or invasive predictors, such as brain imaging or CSF biomarkers.

As such, these studies only serve the purpose of a proof-of-concept, without being further tested in independent and clinical samples.

The current study aimed to develop a clinically translatable machine learning algorithm to predict the conversion to AD in subjects with MCI within a 3-year period, based on fast, easy, and cost-effective predictors.

Specifically, we chose to develop a variety of machine learning algorithms based on distinct supervised machine learning techniques and subsets of the considered predictors, followed by a weighted average rank ensemble strategy on the predictions provided by the various algorithms to obtain a final, more accurate prediction.

Our hypothesis was that high predictive accuracy could be obtained using the above-mentioned approach with simple and non-invasive predictors. We used data obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), with particular consideration for socio-demographic and clinical information and neuropsychological test scores, rather than complex, invasive, and expensive imaging or CSF predictors.
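One plausible reading of that weighted average rank ensemble, sketched under stated assumptions (the risk scores and per-model weights below are invented): each algorithm ranks the subjects by predicted conversion risk, and the per-subject ranks are averaged using per-model weights such as validation accuracy.

```python
# Illustrative weighted average rank ensemble; scores and weights invented.
import numpy as np

def rank_ensemble(score_matrix, weights):
    """score_matrix: (n_models, n_subjects) risk scores, higher = riskier.
    Returns a weighted average rank per subject (higher = riskier)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    # Rank subjects within each model: 1 = lowest predicted risk.
    ranks = np.argsort(np.argsort(score_matrix, axis=1), axis=1) + 1
    return weights @ ranks

# Invented predictions from three hypothetical classifiers for 4 subjects.
scores = np.array([
    [0.9, 0.2, 0.6, 0.4],   # model A
    [0.8, 0.1, 0.7, 0.3],   # model B
    [0.7, 0.3, 0.9, 0.2],   # model C
])
combined = rank_ensemble(scores, weights=[0.5, 0.3, 0.2])
print("consensus risk ranks:", combined)   # subject 0 ranks riskiest
```

Averaging ranks rather than raw scores makes the ensemble insensitive to each model's output scale, which is one common motivation for rank-based combination.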

Media Contacts: 
Anne Trafton – MIT

Original Research: The findings will be presented at the Machine Learning for Health Care conference.

An open access preview of the research is available here.

