Researchers from MIT and elsewhere have developed a system that measures a patient’s pain level by analyzing brain activity from a portable neuroimaging device.
The system could help doctors diagnose and treat pain in unconscious and noncommunicative patients, which could reduce the risk of chronic pain that can occur after surgery.
Pain management is a surprisingly challenging, complex balancing act.
Overtreating pain, for example, runs the risk of addicting patients to pain medication.
Today, doctors generally gauge pain levels according to their patients’ own reports of how they’re feeling.
But what about patients who can’t communicate how they’re feeling effectively – or at all – such as children, elderly patients with dementia, or those undergoing surgery?
In a paper presented at the International Conference on Affective Computing and Intelligent Interaction, the researchers describe a method to quantify pain in patients.
To do so, they leverage an emerging neuroimaging technique called functional near-infrared spectroscopy (fNIRS), in which sensors placed around the head measure oxygenated hemoglobin concentrations that indicate neuronal activity.
Innovators at NASA’s Glenn Research Center have developed a Functional Near-Infrared Spectroscopy (fNIRS) Cognitive Brain Monitor with improved signal processing to obtain more accurate data. fNIRS has been used successfully to monitor cognitive states and activity, and Glenn’s system can be used to continuously monitor brain function during safety-critical tasks, such as flying an airplane or driving a train.
Using head-worn sensors, the technique employs near-infrared light and advanced signal processing to allow real-time, in-task monitoring.
The system not only determines changes in cognitive state by tracking blood hemoglobin levels in the brain, but also filters out non-relevant artifacts, such as the probes’ own motion, rendering the collected data even more accurate. Glenn’s novel use and refinement of fNIRS signals stands to improve safety in a wide variety of applications and environments.
Benefits
- Improved safety: Continuous monitoring of brain activity during safety-critical tasks could prevent serious accidents
- High accuracy: Removing motion artifacts allows real-world data capture to approach laboratory quality
- Portability: The system features comfortable head-worn sensors, and is compact enough to fit into smaller spaces
Applications
- Safety simulations, training, and monitoring for airline pilots, train and mass transit engineers, ship captains, truck drivers, crane and other heavy-equipment operators, and air traffic controllers
- Military simulations and training
- In-home, real-time monitoring and feedback during patient rehabilitation for cognitive impairment or depression
- Replacement for or supplement to functional brain imaging
Functional near-infrared spectroscopy (fNIRS) is an emerging hemodynamic neuroimaging brain-computer interface (BCI) technology that indirectly measures neuronal activity in the brain’s cortex via neuro-vascular coupling.
fNIRS works by quantifying hemoglobin-concentration changes in the brain based on optical intensity measurements, measuring the same hemodynamic changes as functional magnetic resonance imaging (fMRI).
With enough probes in enough locations, fNIRS can detect these hemodynamic activations across the subject’s entire head, thus allowing the determination of cognitive state through the use of pattern classification. fNIRS systems offer low-power, low-cost, highly mobile alternatives for real-time monitoring in safety-critical situations.
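As a rough illustration of how cognitive state can be determined from multi-channel hemodynamic data by pattern classification, the sketch below reduces each channel’s oxygenated-hemoglobin trace to its mean level and assigns the class with the nearest centroid. The feature choice, class labels, and classifier are illustrative assumptions, not the actual fNIRS pipeline.

```python
# Hypothetical sketch: nearest-centroid classification of cognitive state
# from per-channel fNIRS features. All names and labels are illustrative.

def channel_features(hbo_channels):
    """Reduce each channel's oxygenated-hemoglobin trace to its mean level."""
    return [sum(ch) / len(ch) for ch in hbo_channels]

def train_centroids(examples):
    """examples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for vec, label in examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(vec, centroids):
    """Assign the label whose centroid is closest in squared distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist2(vec, centroids[lab]))
```

In practice, a real system would use far richer features and a trained statistical classifier; the point here is only the shape of the pattern-classification step.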
Glenn’s specific contribution to this field is a set of algorithms capable of removing motion artifacts (environment- or equipment-induced errors) from the device’s head-worn optical sensors.
In other words, Glenn’s adaptive filter detects a potential motion artifact from a phase shift in the measured data; identifies the artifact by examining the correlation between that phase shift and changes in hemoglobin concentration; and finally removes the artifact with Kalman filtering whenever the changes in hemoglobin level and the phase shift are not correlated. Glenn’s breakthrough brings the advantages of fNIRS to non-invasive, real-time brain monitoring in motion-filled environments, where it could potentially save lives.
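The gating logic described above can be sketched under heavy simplifications: a scalar random-walk Kalman filter smooths the hemoglobin trace, and a sample is treated as a motion artifact (so the filter trusts the measurement much less) when the phase-shift channel and the hemoglobin signal are uncorrelated over a short window. The window size, thresholds, and noise parameters are illustrative assumptions, not Glenn’s published algorithm.

```python
# Simplified, hypothetical sketch of phase-gated Kalman artifact removal.

def correlation(xs, ys):
    """Pearson correlation; returns 0.0 when either signal is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy) ** 0.5

def kalman_clean(hb, phase, window=5, corr_floor=0.3, q=1e-4, r_ok=1e-3, r_artifact=1.0):
    """Return a cleaned hemoglobin trace.

    Within each sliding window, low |correlation| between the phase shift and
    the hemoglobin signal flags a likely motion artifact; the filter then uses
    a large measurement-noise term R, suppressing the spurious excursion.
    """
    x, p = hb[0], 1.0          # state estimate and its variance
    out = [x]
    for i in range(1, len(hb)):
        lo = max(0, i - window)
        c = correlation(hb[lo:i + 1], phase[lo:i + 1]) if i - lo >= 2 else 1.0
        r = r_ok if abs(c) >= corr_floor else r_artifact
        p += q                  # predict step (random-walk state model)
        k = p / (p + r)         # Kalman gain
        x += k * (hb[i] - x)    # update toward the measurement
        p *= (1 - k)
        out.append(x)
    return out
```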
For their work, the researchers use only a few fNIRS sensors on a patient’s forehead to measure activity in the prefrontal cortex, which plays a major role in pain processing.
Using the measured brain signals, the researchers developed personalized machine-learning models to detect patterns of oxygenated hemoglobin levels associated with pain responses.
When the sensors are in place, the models can detect whether a patient is experiencing pain with around 87 percent accuracy.
“The way we measure pain hasn’t changed over the years,” says Daniel Lopez-Martinez, a Ph.D. student in the Harvard-MIT Program in Health Sciences and Technology and a researcher at the MIT Media Lab.
“If we don’t have metrics for how much pain someone experiences, treating pain and running clinical trials becomes challenging.
The motivation is to quantify pain in an objective manner that doesn’t require the cooperation of the patient, such as when a patient is unconscious during surgery.”
Traditionally, surgery patients receive anesthesia and medication based on their age, weight, previous diseases, and other factors.
If they don’t move and their heart rate remains stable, they’re considered fine.
But the brain may still be processing pain signals while they’re unconscious, which can lead to increased postoperative pain and long-term chronic pain.
The researchers’ system could provide surgeons with real-time information about an unconscious patient’s pain levels, so they can adjust anesthesia and medication dosages accordingly to stop those pain signals.
Joining Lopez-Martinez on the paper are: Ke Peng of Harvard Medical School, Boston Children’s Hospital, and the CHUM Research Centre in Montreal; Arielle Lee and David Borsook, both of Harvard Medical School, Boston Children’s Hospital, and Massachusetts General Hospital; and Rosalind Picard, a professor of media arts and sciences and director of affective computing research in the Media Lab.
Focusing on the forehead
In their work, the researchers adapted the fNIRS system and developed new machine-learning techniques to make the system more accurate and practical for clinical use.
To use fNIRS, sensors are traditionally placed all around a patient’s head. Different wavelengths of near-infrared light shine through the skull and into the brain.
Oxygenated and deoxygenated hemoglobin absorb the wavelengths differently, altering their signals slightly. When the infrared signals reflect back to the sensors, signal-processing techniques use the altered signals to calculate how much of each hemoglobin type is present in different regions of the brain.
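This two-wavelength calculation is conventionally done with the modified Beer-Lambert law: the change in optical density at each wavelength is a weighted sum of the two hemoglobin changes, giving a 2x2 system to invert. A minimal sketch, with placeholder extinction coefficients and path length rather than calibrated constants:

```python
# Illustrative modified Beer-Lambert calculation; constants are placeholders.
import math

def delta_od(i0, i):
    """Change in optical density from baseline intensity i0 to measured i."""
    return math.log10(i0 / i)

def hemoglobin_changes(dod_w1, dod_w2, ext, pathlength=1.0):
    """Solve the 2x2 Beer-Lambert system for (dHbO, dHbR).

    ext = ((e_hbo_w1, e_hbr_w1), (e_hbo_w2, e_hbr_w2)): extinction
    coefficients of oxy-/deoxy-hemoglobin at the two wavelengths.
    """
    (a, b), (c, d) = ext
    det = (a * d - b * c) * pathlength
    dhbo = (d * dod_w1 - b * dod_w2) / det
    dhbr = (a * dod_w2 - c * dod_w1) / det
    return dhbo, dhbr
```

Real systems also correct for scattering and the differential path-length factor, which this sketch folds into a single constant.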
When a patient is hurt, regions of the brain associated with pain will see a sharp rise in oxygenated hemoglobin and decreases in deoxygenated hemoglobin, and these changes can be detected through fNIRS monitoring. But traditional fNIRS systems place sensors all around the patient’s head.
This can take a long time to set up, and it can be difficult for patients who must lie down. It also isn’t really feasible for patients undergoing surgery.
Therefore, the researchers adapted the fNIRS system to measure signals only from the prefrontal cortex. While pain processing involves information from multiple regions of the brain, studies have shown that the prefrontal cortex integrates all of that information. As a result, sensors need to be placed only over the forehead.
Another problem with traditional fNIRS systems is they capture some signals from the skull and skin that contribute to noise. To fix that, the researchers installed additional sensors to capture and filter out those signals.
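A common way to use such extra sensors is short-separation regression: a shallow channel that mostly sees scalp and skull blood flow is scaled by least squares and subtracted from the deep channel. The paper does not specify its exact filtering method, so the sketch below is an assumption rather than the authors’ implementation.

```python
# Hypothetical short-separation regression for removing scalp/skull signal.

def regress_out(long_ch, short_ch):
    """Subtract the best least-squares scaling of the shallow (short) signal
    from the deep (long) channel, leaving the brain-related component."""
    n = len(long_ch)
    ml = sum(long_ch) / n
    ms = sum(short_ch) / n
    cov = sum((l - ml) * (s - ms) for l, s in zip(long_ch, short_ch))
    var = sum((s - ms) ** 2 for s in short_ch)
    beta = cov / var if var else 0.0   # regression coefficient
    # Remove the zero-mean scalp component; the long channel's mean is kept.
    return [l - beta * (s - ms) for l, s in zip(long_ch, short_ch)]
```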
Personalized pain modeling
On the machine-learning side, the researchers trained and tested a model on a labeled pain-processing dataset they collected from 43 male participants.
(Next they plan to collect a lot more data from diverse patient populations, including female patients—both during surgery and while conscious, and at a range of pain intensities—in order to better evaluate the accuracy of the system.)
Each participant wore the researchers’ fNIRS device and was randomly exposed to an innocuous sensation and then to about a dozen shocks to the thumb at two different pain intensities, measured on a scale of 1-10: a low level (about a 3/10) or a high level (about a 7/10).
Those two intensities were determined with pretests: The participants self-reported the low level as being only strongly aware of the shock without pain, and the high level as the maximum pain they could tolerate.
In training, the model extracted dozens of features from the signals related to how much oxygenated and deoxygenated hemoglobin was present, as well as how quickly the oxygenated hemoglobin levels rose.
Those two metrics – quantity and speed – give a clearer picture of a patient’s experience of pain at the different intensities.
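A minimal sketch of those two feature families, computed from a single trial’s hemoglobin traces: overall levels (“quantity”) and the peak rate of rise of oxygenated hemoglobin (“speed”). The function name, feature names, and sampling-rate handling are illustrative assumptions, not the paper’s feature set.

```python
# Hypothetical per-trial feature extraction from fNIRS hemoglobin traces.

def pain_features(hbo, hbr, fs=10.0):
    """hbo/hbr: equally sampled oxy-/deoxy-hemoglobin traces;
    fs: assumed sampling rate in samples per second."""
    mean_hbo = sum(hbo) / len(hbo)
    mean_hbr = sum(hbr) / len(hbr)
    # Fastest rise of HbO between consecutive samples, in units per second.
    max_slope = max(b - a for a, b in zip(hbo, hbo[1:])) * fs
    return {"mean_hbo": mean_hbo, "mean_hbr": mean_hbr, "max_hbo_slope": max_slope}
```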
Importantly, the model also automatically generates “personalized” submodels that extract high-resolution features from individual patient subpopulations.
Traditionally, in machine learning, one model learns classifications – “pain” or “no pain” – based on average responses of the entire patient population. But that generalized approach can reduce accuracy, especially with diverse patient populations.
The researchers’ model instead trains on the entire population but simultaneously identifies shared characteristics among subpopulations within the larger dataset.
For example, pain responses to the two intensities may differ between young and old patients, or depending on gender.
This generates learned submodels that break off and learn, in parallel, patterns of their patient subpopulations.
At the same time, however, they’re all still sharing information and learning patterns shared across the entire population.
In short, they’re simultaneously leveraging fine-grained personalized information and population-level information to train better.
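The shared-plus-personalized idea can be sketched with a deliberately simple stand-in: each subgroup’s decision rule is a blend of a threshold fit on the whole population and one fit on that subgroup alone. The blending weight and the one-feature threshold classifier are illustrative assumptions, not the paper’s actual multi-task model.

```python
# Hypothetical blend of a population-level model with per-subgroup models.

def fit_threshold(examples):
    """examples: list of (feature, label) with label in {'pain', 'no pain'}.
    Returns the midpoint between the two class means."""
    pain = [x for x, y in examples if y == "pain"]
    rest = [x for x, y in examples if y == "no pain"]
    return (sum(pain) / len(pain) + sum(rest) / len(rest)) / 2

def fit_personalized(examples_by_group, shared_weight=0.5):
    """Blend a population-wide threshold with each subgroup's own threshold."""
    all_examples = [e for group in examples_by_group.values() for e in group]
    global_t = fit_threshold(all_examples)
    return {g: shared_weight * global_t + (1 - shared_weight) * fit_threshold(ex)
            for g, ex in examples_by_group.items()}

def predict(x, threshold):
    return "pain" if x >= threshold else "no pain"
```

Even in this toy form, the same input can be classified differently for different subgroups, which is the behavior the personalized submodels are after.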
The personalized models and a traditional model were evaluated at classifying “pain” or “no pain” on a random held-out set of participants’ brain signals from the dataset, for which the self-reported pain scores were known.
The personalized models outperformed the traditional model by about 20 percent, reaching about 87 percent accuracy.
“Because we are able to detect pain with this high accuracy, using only a few sensors on the forehead, we have a solid basis for bringing this technology to a real-world clinical setting,” Lopez-Martinez says.
Provided by Massachusetts Institute of Technology