A new deep learning algorithm can distinguish different sleep stages

A new deep learning model developed by researchers at the University of Eastern Finland can identify sleep stages as accurately as an experienced physician.

This opens up new avenues for the diagnostics and treatment of sleep disorders, including obstructive sleep apnea.

Obstructive sleep apnea (OSA) is a nocturnal breathing disorder that causes a major burden on public health care systems and national economies.

It is estimated that up to one billion people worldwide suffer from obstructive sleep apnea, and the number is expected to grow due to population ageing and increased prevalence of obesity.

When untreated, OSA increases the risk of cardiovascular diseases and diabetes, among other severe health consequences.

The identification of sleep stages is essential in the diagnostics of sleep disorders, including obstructive sleep apnea.

Traditionally, sleep is manually classified into five stages, which are wake, rapid eye movement (REM) sleep and three stages of non-REM sleep. However, manual scoring of sleep stages is time-consuming, subjective and costly.

To overcome these challenges, researchers at the University of Eastern Finland used polysomnographic recording data from healthy individuals and individuals with suspected OSA to develop an accurate deep learning model for automatic classification of sleep stages.

In addition, they wanted to find out how the severity of OSA affects classification accuracy.

Modern sleep diagnostics is based on wearable, non-intrusive methods. Image credit: Juha Rutanen.

In healthy individuals, the model was able to identify sleep stages with 83.7% accuracy when using a single frontal electroencephalography (EEG) channel, and with 83.9% accuracy when supplemented with an electrooculogram (EOG). In patients with suspected OSA, the model achieved accuracies of 82.9% (single EEG channel) and 83.8% (EEG and EOG channels).

The single-channel accuracies ranged from 84.5% for individuals without OSA to 76.5% for severe OSA patients.

The accuracies achieved by the model are comparable to the level of agreement between experienced physicians performing manual sleep scoring.

The model, however, has the benefit of being systematic, always following the same protocol, and completing the scoring in a matter of seconds.

According to the researchers, deep learning enables automatic sleep staging for suspected OSA patients with high accuracy. The study was published in the IEEE Journal of Biomedical and Health Informatics.

The Sleep Technology and Analytics Group (STAG) at the University of Eastern Finland tackles sleep diagnostics challenges using a variety of approaches.

The methods developed by the group are based on wearable, non-intrusive sensors, better diagnostic parameters, and modern computational solutions built on artificial intelligence.

The new methods developed by the group are expected to significantly improve OSA severity assessment, promote individualised treatment planning, and enable more reliable prediction of OSA-related daytime symptoms and comorbidities.


Sleep disorders are widespread in the population and may lead to serious health problems that affect quality of life [1].

Insomnia, hypersomnias, parasomnias, sleep-related breathing disorders, narcolepsy, circadian rhythm disorders, and sleep-related movement disorders are among the most common of these conditions.

Although many of these disorders can be diagnosed clinically, some of them must be analyzed using advanced techniques in a laboratory environment [1,2].

Polysomnogram (PSG) recordings are the physiological signals collected from a subject during an entire night of sleep.

The PSG is a multivariate recording consisting of signals such as the electroencephalogram (EEG), electrocardiogram (ECG), electrooculogram (EOG), and electromyogram (EMG) [3].

After recording, sleep stage scoring is performed on the PSG data. This process is carried out manually by sleep experts who score and grade the sleep stages [4].

These experts visually evaluate the PSG signals in fixed time frames (epochs) and then assign scores according to various criteria.

The main criteria for this process are based on the guidelines that were first proposed by Rechtschaffen and Kales (R&K) [5], and later developed by the American Academy of Sleep Medicine (AASM) [6].

According to the R&K rules, a sleep epoch is classified as wake (W), one of four non-rapid eye movement (NREM) stages (S1–S4), or rapid eye movement (REM) sleep.

According to the AASM guidelines, the S3 and S4 stages are merged into a single class, slow-wave sleep (SWS).
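As a simple illustration, this relabelling from the R&K stages to the AASM classes can be expressed as a lookup table. The sketch below is illustrative only; the AASM stage names N1–N3 are the conventional labels and are not part of the text above.

```python
# Illustrative R&K -> AASM relabelling (a minimal sketch, not part of the
# cited guidelines themselves). S3 and S4 are merged into slow-wave sleep.
RK_TO_AASM = {
    "W": "W",      # wake
    "S1": "N1",
    "S2": "N2",
    "S3": "N3",    # slow-wave sleep (SWS)
    "S4": "N3",    # merged with S3 under the AASM rules
    "REM": "REM",
}

def to_aasm(rk_labels):
    """Convert a sequence of R&K stage labels to AASM labels."""
    return [RK_TO_AASM[label] for label in rk_labels]

print(to_aasm(["W", "S1", "S2", "S3", "S4", "REM"]))
# -> ['W', 'N1', 'N2', 'N3', 'N3', 'REM']
```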

The wake stage corresponds to the period in which the subject is awake before falling asleep.

NREM S1 is the first stage of sleep, in which brain activity slows down and the muscles relax.

Stage S2 is where actual sleep begins, and eye movements stop in this stage.

Stage S3 is called deep sleep because the subject's brain activity is significantly reduced.

Deep sleep activity continues in the NREM S4 sleep stage.

In the REM stage, the eyes remain closed but move rapidly [7].

The visual inspection of PSG signals and manual determination of sleep stages is a complex, costly, and problematic process that requires expertise [8,9].

Moreover, EEG signal variations are hard to detect visually due to their random and chaotic nature [10].

For this reason, automated detection and recognition systems have been developed to assist the experts. The most commonly used PSG signal for sleep-stage classification is EEG data from one or more channels. The EEG signal is generally preferred because it can be easily obtained with wearable technologies and also contains useful information [10,11].

Wearable technologies are an important advancement because they allow subjects' sleep data to be monitored comfortably in their home environment [12].

EEG signal processing commonly involves feature extraction, feature selection, and classification steps [13]. Time-domain, frequency-domain, and time-frequency-domain transformations, as well as non-linear feature extraction methods, have been employed by various researchers at the feature extraction stage [14,15,16].
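As a minimal sketch of the frequency-domain branch of such a pipeline, the snippet below computes band powers from a single EEG epoch using Welch's method. The sampling rate, epoch length, band limits, and the random placeholder signal are illustrative assumptions, not values taken from the studies cited here.

```python
import numpy as np
from scipy.signal import welch

FS = 100                      # sampling rate in Hz (assumed for illustration)
BANDS = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch, fs=FS):
    """Return absolute power in each EEG band for a 1-D signal epoch."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 4)
    return {name: np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                           freqs[(freqs >= lo) & (freqs < hi)])
            for name, (lo, hi) in BANDS.items()}

epoch = np.random.randn(30 * FS)   # stand-in for a real 30 s EEG epoch
print(band_powers(epoch))
```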

Due to the characteristic features of these signals, more advanced signal-processing techniques and complex machine learning algorithms are often preferred over plain time- and frequency-domain approaches [7,10,17,18,19].

However, these approaches mostly rely on shallow classifiers applied to features obtained from one or more handcrafted feature extraction/selection processes.

Acharya et al. [20] performed automatic identification of sleep stages with a Gaussian mixture model classifier using higher-order spectra (HOS)-based features from two channels of EEG data.

For the feature extraction stage, Sharma et al. [7] employed a novel three-band time-frequency localized wavelet filter bank, and then the extracted features were given as input to the support vector machine (SVM) classifier for the automated recognition of sleep-stages. Hassan et al. [21] first decomposed the EEG signals using ensemble empirical mode decomposition (EEMD), and then extracted several statistical properties from the data.

For this purpose, they employed a classifier called random undersampling boosting (RUSBoost), which can automatically score sleep using the obtained features. Zhu et al. [22] performed the sleep-stage recognition task on 14,963 EEG segments using a graph-domain-based approach.

They mapped EEG signals into a visibility graph (VG) and a horizontal visibility graph (HVG). Rahman et al. [23] preferred the discrete wavelet transform (DWT) for feature extraction on single-channel EOG signals, and claimed the superiority of EOG over EEG signals in the classification of sleep stages.

Tsinalis et al. [12] obtained sleep stage-specific signal characteristics using time-frequency-based feature extraction, and achieved an average accuracy of 86% on EEG data of 20 healthy young adults.

Bajaj et al. [24] proposed an EEG-based technique that used time-frequency images (TFIs). Their method can automatically classify the data into sleep stages by using the least-square SVM classifier and the features from the histograms of segmented TFIs.

Huang et al. [9] extracted spectral features from two forehead EEG channels (FP1 and FP2) using the short-time fast Fourier transform and manual scoring knowledge.

They also classified sleep stages with these features using the relevance vector machine classification technique. Nakamura et al. [25] employed a multi-class SVM to classify features derived from EEG using multi-scale fuzzy entropy (MSFE) and multi-scale permutation entropy (MSPE). Similarly, Rodríguez-Sotelo et al. [4] used entropy-based features with an unsupervised classifier. Acharya et al. [26] proposed a solution for the recognition of six sleep stages using non-linear parameters.

Fell et al. [27] used a variety of spectral and non-linear measurements from EEG signals for the discrimination of sleep stages.

They reported that combinations of these measurements produce better results than those of previous studies in the literature. In another study, Alickovic and Subasi [3] proposed a three-module structure for the same problem.

In the first module of their solution, the signals obtained from the Pz–Oz channel were de-noised using multi-scale principal component analysis (PCA). In the second module, feature extraction was performed using statistical methods on the signals separated into sub-bands by the DWT.

Finally, in the third module, a rotational SVM was used to classify the data into five sleep stages with an accuracy of 91.1%.
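A minimal sketch of this kind of DWT-plus-classifier pipeline is shown below, using PyWavelets and scikit-learn. It is only a generic illustration of sub-band statistical features feeding a standard SVM; the wavelet, decomposition level, feature set, and random placeholder data are assumptions, and it does not reproduce the published three-module system.

```python
import numpy as np
import pywt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def dwt_features(epoch, wavelet="db4", level=5):
    """Statistical features (mean |c|, std, energy) from each DWT sub-band."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats.extend([np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)])
    return np.array(feats)

# Placeholder data: 100 "epochs" of 3000 samples each with random labels,
# standing in for real, expert-scored PSG epochs.
rng = np.random.default_rng(0)
X_epochs = rng.standard_normal((100, 3000))
y = rng.integers(0, 5, size=100)

X = np.vstack([dwt_features(e) for e in X_epochs])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
```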

Imtiaz et al. [28] suggested a small decision tree (DT)-based classifier for the automated scoring of sleep stages. They reported accuracy rates of 82% and 79% during training and testing, respectively.

Silveria et al. [29] applied the DWT to EEG signals and performed sleep-stage classification with the random forest (RF) classifier on kurtosis, skewness, and variance features. Şen et al. [15] collected 41 attributes under four categories for the feature-extraction stage, and then used a variety of feature selection methods to select the useful features from these attributes.

Memar and Faradji [30] also proposed a system for the classification of the wake and sleep stages.

During the feature-extraction stage, they decomposed each EEG sample into eight sub-bands with different frequency content, and then classified the extracted features using the random forest classifier. Yulita et al. [31] used fast-convolutional-method-based feature learning and a softmax classifier for automatic sleep stage classification. Vural et al. [32] constructed an EEG-based classification structure using PCA.

In the state-of-the-art sleep stage classification methods reviewed above, the feature extraction, selection, and classification tasks are performed on the data as separate processes. Recent developments in machine learning have led to the emergence of end-to-end deep structures capable of performing these separate tasks together in a more efficient way [33,34,35,36].

Deep learning methods have already demonstrated their success in various research areas such as image recognition, sound processing, and natural language processing. Accordingly, deep models are already widely applied in the biomedical field.

There has been a notable increase in the use of deep learning approaches for the evaluation of biomedical signals (EEG, ECG, EMG, and EOG) [37]. Deep learning methodologies have been employed for many challenging tasks, such as computer-based evaluation of ECG data for heart diseases [38,39,40,41] and the detection of neurological disorders using EEG signals [42,43,44,45].

There are also a few studies in the literature where deep learning models have been used for sleep stage classification. Supratak et al. [46] developed DeepSleepNet, which combines a convolutional neural network (CNN) and bidirectional long short-term memory (BLSTM) for sleep stage classification; the CNN and BLSTM are applied as consecutive steps.

Representation learning is carried out in the CNN part, and sequence residual learning is realized in the BLSTM part. Tsinalis et al. [12] classified data from more than 20 healthy subjects using a CNN model on single-channel EEG data.

They achieved an average accuracy of 74% for five-stage sleep classification. Tripathy and Acharya [47] classified sleep stages using RR time series and EEG signals with a deep neural network (DNN). Chambon et al. [48] proposed an 11-layer two-dimensional (2D) CNN model for sleep stage classification, in which EEG/EOG and EMG PSG signals are used as the input.

They reported that using a limited number of EEG channels (six) in their model showed performance similar to using 20 channels of EEG data. Michielli et al. [49] proposed a cascaded LSTM architecture for automated sleep stage classification using single-channel EEG signals.
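The hybrid architecture described above can be sketched roughly as a per-epoch CNN feature extractor followed by a bidirectional LSTM over consecutive epochs. The PyTorch snippet below is a toy illustration of that idea only; the layer sizes, kernel widths, epoch length, and number of classes are assumptions and do not correspond to the published DeepSleepNet configuration.

```python
import torch
import torch.nn as nn

class CnnBlstm(nn.Module):
    """Toy CNN + bidirectional LSTM sleep stager (illustrative sizes only)."""
    def __init__(self, n_classes=5):
        super().__init__()
        # per-epoch feature extractor on raw single-channel EEG
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=50, stride=6), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(16, 32, kernel_size=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # sequence model over consecutive epochs
        self.blstm = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                                  # x: (batch, seq, samples)
        b, s, t = x.shape
        feats = self.cnn(x.reshape(b * s, 1, t)).squeeze(-1)   # (b*s, 32)
        out, _ = self.blstm(feats.reshape(b, s, 32))
        return self.fc(out)                                # per-epoch class scores

model = CnnBlstm()
scores = model(torch.randn(2, 10, 3000))   # 2 records, 10 epochs of 3000 samples
print(scores.shape)                        # torch.Size([2, 10, 5])
```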

In this study, a new deep learning model based on a one-dimensional convolutional neural network (1D-CNN) is proposed for automated sleep stage classification.

The model forms an end-to-end structure in which sleep stages are recognized from raw PSG signals without any handcrafted features.

One of the most important contributions of this study is that the proposed deep model can be used, without changing any of its layer parameters, for two to six sleep classes and for other types of PSG signals. The model is thus flexible, and it was developed using two popular sleep databases available in the literature.
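To make the idea of an end-to-end 1D-CNN on raw epochs concrete, a minimal PyTorch sketch is given below. The layer choices, epoch length, and sampling rate are illustrative assumptions and are not the architecture proposed in this study; the snippet only shows the general pattern of convolutional feature learning followed by a classification layer, where varying the number of sleep classes changes only the output layer.

```python
import torch
import torch.nn as nn

class SleepStage1DCNN(nn.Module):
    """Minimal end-to-end 1D-CNN on raw PSG epochs (illustrative layers only,
    not the architecture proposed in the study)."""
    def __init__(self, n_channels=1, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=64, stride=8), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=16), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 64, kernel_size=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):          # x: (batch, channels, samples), raw epoch
        return self.classifier(self.features(x).squeeze(-1))

# batch of eight 30 s single-channel epochs sampled at 100 Hz (assumed)
logits = SleepStage1DCNN()(torch.randn(8, 1, 3000))
print(logits.shape)                # torch.Size([8, 5])
```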


Source:
University of Eastern Finland
