Researchers created an open-source multitaper spectrogram program for electroencephalogram (EEG) analyses


Researchers at the University of Tennessee (UT) have developed a free, open-source computer program that can be used to create visual and quantitative representations of brain electrical activity in laboratory animals, in hopes of developing countermeasures for opioid use disorder.

The program is described in a paper published in JoVE. Lead author Christopher O’Brien is a UT graduate who manages the research laboratory of Helen Baghdoyan and Ralph Lydic, both co-authors on the paper and professors in UT’s Department of Psychology and the Graduate School of Medicine’s Department of Anesthesiology.

In the paper, the researchers describe the steps they took to create a multitaper spectrogram program for electroencephalogram (EEG) analyses with accessible, user-friendly code.

They validated the program through analyses of EEG spectrograms of mice that had received different opioid treatments.
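For readers unfamiliar with the technique, the following is a minimal NumPy/SciPy sketch of the general multitaper approach, in which each window of signal is tapered with a set of orthogonal DPSS (Slepian) windows and the resulting periodograms are averaged. It is not the authors' compiled program; the function name, window length, step size, and taper parameters are illustrative assumptions.

```python
# Minimal sketch of a multitaper spectrogram for a 1-D EEG signal sampled at fs Hz.
# This is NOT the compiled program described in the paper; it only illustrates the
# general DPSS-based multitaper idea. Window/step lengths and taper counts are assumed.
import numpy as np
from scipy.signal.windows import dpss

def multitaper_spectrogram(x, fs, win_sec=4.0, step_sec=1.0, nw=3, n_tapers=5):
    """Sliding-window multitaper power spectrogram of a 1-D signal x."""
    n_win = int(win_sec * fs)
    n_step = int(step_sec * fs)
    tapers = dpss(n_win, NW=nw, Kmax=n_tapers)          # (n_tapers, n_win) DPSS windows
    freqs = np.fft.rfftfreq(n_win, d=1.0 / fs)
    starts = np.arange(0, len(x) - n_win + 1, n_step)
    spec = np.zeros((len(freqs), len(starts)))
    for j, s in enumerate(starts):
        seg = x[s:s + n_win]
        tapered = tapers * seg                          # apply each taper to the same segment
        psd = np.abs(np.fft.rfft(tapered, axis=1)) ** 2
        spec[:, j] = psd.mean(axis=0)                   # average periodograms across tapers
    times = starts / fs + win_sec / 2.0                 # window centers in seconds
    return freqs, times, spec

# Example with 10 minutes of synthetic "EEG" at 256 Hz
fs = 256
x = np.random.randn(fs * 600)
freqs, times, spec = multitaper_spectrogram(x, fs)
print(spec.shape)   # (n_freqs, n_windows)
```

Averaging across tapers reduces the variance of the power estimate relative to a single-window spectrogram, which is the main motivation for the multitaper method.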

[Image: a woman wearing an EEG cap. Credit: University of Tennessee at Knoxville.]

“There is a misconception that opioids promote sleep, but in quantitative studies of states of sleep and wakefulness using electroencephalographic recordings of brain waves, opiates are shown to disrupt sleep,” Lydic said. “Additionally, drug addiction studies show that abnormal sleep is associated with increased likelihood of addiction relapse.”

Note: Download the compiled Multitaper Spectrogram Program


Deep Learning in the Spectrogram Representation

Our goal here is to train a network to classify subjects from EEG spectrograms recorded at baseline in binary problems, with classification labels such as HC (healthy controls) versus PD (patients with idiopathic RBD who later converted to Parkinson's disease), etc.

Here we first explore a deep learning approach inspired by recent successes in image classification using deep convolutional neural networks, which are designed to exploit invariances and capture compositional features in the data [see, e.g., (9, 11, 12)].

These systems have been largely developed to deal with image data, i.e., 2D arrays, possibly from different channels, or audio data [as in van den Oord et al. (16)] and, more recently, with EEG data as well (15, 17).

Thus, inputs to such networks are data cubes (multichannel stacked images). In the same vein, we aimed to work here with the spectrograms of EEG channel data, i.e., 2D time-frequency maps.

Such representations capture spectral dynamics as, essentially, images, with the equivalent of image depth provided by the multiple available EEG channels (or, e.g., current source density maps or cortically mapped quantities from different spatial locations). Using such a representation, we avoid the need to select frequency bands or channels during feature selection. This approach essentially treats EEG channel data like an audio file, and it mimics similar uses of deep networks in that domain.
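As a concrete illustration of this representation, the following sketch stacks per-channel spectrograms into a data cube and passes it to a small 2-D convolutional classifier. PyTorch, the array dimensions, and the layer sizes are assumptions made for illustration; they are not the architecture used in the cited work.

```python
# Sketch: stack per-channel EEG spectrograms into a data cube and classify it
# with a small 2-D CNN. Framework, dimensions, and layer sizes are illustrative
# assumptions, not the architecture used in the cited work.
import torch
import torch.nn as nn

n_channels, n_freqs, n_frames = 14, 64, 128   # assumed shape of the time-frequency maps

class SpectrogramCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1),  # EEG channels act as image depth
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_freqs // 4) * (n_frames // 4), 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),                             # e.g., HC vs. PD converters
        )

    def forward(self, x):            # x: (batch, channels, freqs, frames)
        return self.classifier(self.features(x))

# A batch of 8 subjects' spectrogram cubes
cube = torch.randn(8, n_channels, n_freqs, n_frames)
logits = SpectrogramCNN()(cube)
print(logits.shape)                  # torch.Size([8, 2])
```

Because the convolution operates over the time-frequency plane while the EEG channels act as input depth, the same filters are applied across all frequencies and time frames, which is how such a network can pick up compositional time-frequency features without hand-selected bands or channels.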

Recurrent neural networks (RNNs) can also be used to classify images, e.g., by treating image pixel rows as time series. This is particularly appropriate for the data in this study, given the good performance we obtained using echo state networks (ESNs) on temporal spectral data [Ruffini et al. (14)].

We also study the use of stacked architectures of long short-term memory (LSTM) or gated recurrent unit (GRU) cells, which have shown good representational power and can be trained using backpropagation (12, 18, 19).
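To make the recurrent variant concrete, here is a hedged sketch in which each spectrogram time frame (all channels' frequency bins concatenated) is treated as one step of a sequence fed to a stacked LSTM. Again, the framework, sizes, and class count are illustrative assumptions rather than the configuration used in the study.

```python
# Sketch: stacked LSTM over spectrogram time frames, treating the concatenated
# per-channel frequency bins at each frame as one time step. All sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

n_channels, n_freqs, n_frames = 14, 64, 128
input_size = n_channels * n_freqs        # features per time step

class SpectrogramLSTM(nn.Module):
    def __init__(self, hidden=128, layers=2, n_classes=2):
        super().__init__()
        self.rnn = nn.LSTM(input_size, hidden, num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, channels, freqs, frames)
        b = x.size(0)
        # reorder to (batch, frames, channels * freqs) so frames form the sequence
        seq = x.permute(0, 3, 1, 2).reshape(b, n_frames, input_size)
        _, (h_n, _) = self.rnn(seq)
        return self.out(h_n[-1])         # classify from the top layer's final hidden state

logits = SpectrogramLSTM()(torch.randn(4, n_channels, n_freqs, n_frames))
print(logits.shape)                      # torch.Size([4, 2])
```

A GRU variant would replace nn.LSTM with nn.GRU and adjust the hidden-state unpacking, since nn.GRU returns only a hidden state rather than a (hidden, cell) pair.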

Our general assumption is that some relevant aspects in EEG data from our datasets are contained in compositional features embedded in the time-frequency representation.

This assumption is not unique to our particular classification domain but should hold for EEG in general. In particular, we expect that deep networks may be able to efficiently learn to identify features in the time-frequency domain associated with bursting events across frequency bands that may help separate classes, as in “bump analysis” (20).

Bursting events are hypothesized to be representative of transient synchrony of neural populations, which are known to be affected in neurodegenerative diseases such as Parkinson’s or Alzheimer’s disease (21).

Finally, we note that in this study we have made no attempt to fully optimize the network architectures. In particular, no fine-tuning of hyperparameters has been carried out using a validation-set approach, something we leave for future work with larger datasets.

Our aim has been to implement a proof of concept of the idea that deep learning approaches can provide value for classification and analysis of time-frequency representations of EEG data—while possibly providing new physiological insights.


Source:
University of Tennessee at Knoxville
Media Contacts:
Karen Dunlap – University of Tennessee at Knoxville
Image Source:
The image is credited to University of Tennessee at Knoxville.

Original Research: Open access
“Computer-based Multitaper Spectrogram Program for Electroencephalographic Data” by Christopher B. O’Brien, Helen A. Baghdoyan, and Ralph Lydic. JoVE. doi:10.3791/60333.
