New brain-machine interface headset allows those with disabilities to maneuver their wheelchair with thought alone


Combining new classes of nanomembrane electrodes with flexible electronics and a deep learning algorithm could help disabled people wirelessly control an electric wheelchair, interact with a computer or operate a small robotic vehicle without donning a bulky hair-electrode cap or contending with wires.

By providing a fully portable, wireless brain-machine interface (BMI), the wearable system could offer an improvement over conventional electroencephalography (EEG) for measuring signals from visually evoked potentials in the human brain.

The system’s ability to measure EEG signals for BMI has been evaluated with six human subjects, but has not been studied with disabled individuals.

The project, conducted by researchers from the Georgia Institute of Technology, University of Kent and Wichita State University, was reported on September 11 in the journal Nature Machine Intelligence.

“This work reports fundamental strategies to design an ergonomic, portable EEG system for a broad range of assistive devices, smart home systems and neuro-gaming interfaces,” said Woon-Hong Yeo, an assistant professor in Georgia Tech’s George W. Woodruff School of Mechanical Engineering and Wallace H. Coulter Department of Biomedical Engineering. “The primary innovation is in the development of a fully integrated package of high-resolution EEG monitoring systems and circuits within a miniaturized skin-conformal system.”

BMI is an essential part of rehabilitation technology that allows those with amyotrophic lateral sclerosis (ALS), chronic stroke or other severe motor disabilities to control prosthetic systems.

Gathering brain signals known as steady-state visually evoked potentials (SSVEP) currently requires an electrode-studded hair cap with wet electrodes, adhesives and wires that connect to computer equipment that interprets the signals.

Yeo and his collaborators are taking advantage of a new class of flexible, wireless sensors and electronics that can be easily applied to the skin.

The system includes three primary components: highly flexible, hair-mounted electrodes that make direct contact with the scalp through the hair; an ultrathin nanomembrane electrode; and soft, flexible circuitry with a Bluetooth telemetry unit.

The recorded EEG data from the brain is processed in the flexible circuitry, then wirelessly delivered to a tablet computer via Bluetooth from up to 15 meters away.

Beyond the sensing requirements, detecting and analyzing SSVEP signals has been challenging because of their low amplitude, in the range of tens of microvolts, comparable to electrical noise in the body.

Researchers also must deal with variation in human brains. Yet accurately measuring the signals is essential to determining what the user wants the system to do.

To address those challenges, the research team turned to deep learning neural network algorithms running on the flexible circuitry.

“Deep learning methods, commonly used to classify pictures of everyday things such as cats and dogs, are used to analyze the EEG signals,” said Chee Siang (Jim) Ang, senior lecturer in Multimedia/Digital Systems at the University of Kent.

“Like pictures of a dog, which can have a lot of variations, EEG signals have the same challenge of high variability. Deep learning methods have proven to work well with pictures, and we show that they work very well with EEG signals as well.”

In addition, the researchers used deep learning models to identify which electrodes are the most useful for gathering information to classify EEG signals. “We found that the model is able to identify the relevant locations in the brain for BMI, which is in agreement with human experts,” Ang added. “This reduces the number of sensors we need, cutting cost and improving portability.”
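The published model's architecture is not detailed here, but as a rough illustration of the general approach (a compact convolutional network that maps short multi-channel EEG windows to SSVEP target classes), consider the following sketch; the channel count, window length, class count and layer sizes are illustrative assumptions, not the authors' design.

```python
# Minimal sketch of a 1-D CNN for SSVEP classification (illustrative only).
import torch
import torch.nn as nn

class SSVEPNet(nn.Module):
    def __init__(self, n_channels=3, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),  # temporal filters
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                               # pool over time
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, channels, samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)      # logits over SSVEP targets

model = SSVEPNet()
window = torch.randn(1, 3, 250)        # one synthetic 1-s window at 250 Hz
command_logits = model(window)         # argmax would pick the wheelchair command
```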

The system uses three elastomeric scalp electrodes held onto the head with a fabric band, ultrathin wireless electronics conformed to the neck, and a skin-like printed electrode placed on the skin below an ear.

The soft, dry electrodes conform to the skin and do not require adhesive or gel. Along with ease of use, the system could reduce noise and interference and provide higher data transmission rates compared to existing systems.

The system was evaluated with six human subjects.

The deep learning algorithm with real-time data classification could control an electric wheelchair and a small robotic vehicle. The signals could also be used to control a display system without using a keyboard, joystick or other controller, Yeo said.

“Typical EEG systems must cover the majority of the scalp to get signals, but potential users may be sensitive about wearing them,” Yeo added. “This miniaturized, wearable soft device is fully integrated and designed to be comfortable for long-term use.”

Next steps will include improving the electrodes and making the system more useful for motor-impaired individuals.

Test subject wearing flexible wireless electronics conformed to the back of the neck, with dry hair electrodes under a fabric headband and a membrane electrode on the mastoid, connected with thin-film cables. The image is credited to Woon-Hong Yeo.

“Future study would focus on investigation of fully elastomeric, wireless self-adhesive electrodes that can be mounted on the hairy scalp without any support from headgear, along with further miniaturization of the electronics to incorporate more electrodes for use with other studies,” Yeo said.

“The EEG system can also be reconfigured to monitor motor-evoked potentials or motor imagination for motor-impaired subjects, which will be further studied as a future work on therapeutic applications.”

Long-term, the system may have potential for other applications where simpler EEG monitoring would be helpful, such as in sleep studies done by Audrey Duarte, an associate professor in Georgia Tech’s School of Psychology.

“This EEG monitoring system has the potential to finally allow scientists to monitor human neural activity in a relatively unobtrusive way as subjects go about their lives,” she said. “For example, Dr. Yeo and I are currently using a similar system to monitor neural activity while people sleep in the comfort of their own homes, rather than the lab with bulky, rigid, uncomfortable equipment, as is customarily done.

Measuring sleep-related neural activity with an imperceptible system may allow us to identify new, non-invasive biomarkers of Alzheimer’s-related neural pathology predictive of dementia.”

In addition to those already mentioned, the research team included Musa Mahmood, Yun-Soung Kim, Saswat Mishra, and Robert Herbert from Georgia Tech; Deogratias Mzurikwao from the University of Kent; and Yongkuk Lee from Wichita State University.

Funding: This research was supported by a grant from the Fundamental Research Program (project PNK5061) of Korea Institute of Materials Science, funding by the Nano-Material Technology Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (no. 2016M3A7B4900044), and support from the Institute for Electronics and Nanotechnology, a member of the National Nanotechnology Coordinated Infrastructure, which is supported by the National Science Foundation (grant ECCS-1542174).


Over recent years, the brain-computer interface (BCI) has emerged as an alternative communication system between the human brain and an output device. Deciphered intents, after detecting electrical signals from the human scalp, are translated into control commands used to operate external devices, computer displays and virtual objects in real time. BCI provides augmentative communication by creating a muscle-free channel between the brain and the output devices, primarily for subjects with neuromotor disorders or trauma to the nervous system, notably spinal cord injuries (SCI), and for subjects with unaffected sensorimotor functions but disarticulated or amputated residual limbs.

This review identifies the potential of electroencephalography (EEG) based BCI applications for locomotion and mobility rehabilitation. Patients could benefit from advances such as wearable lower-limb (LL) exoskeletons, orthoses, prostheses, wheelchairs, and assistive-robot devices.

The EEG communication signals employed by the aforementioned applications, which also offer feasibility for future development in the field, are sensorimotor rhythms (SMR), event-related potentials (ERP) and visual evoked potentials (VEP). The review is an effort to advance the development of the user's LL-related mental tasks for BCI reliability and confidence measures. As a novel contribution, the reviewed BCI control paradigms for wearable LL and assistive-robot devices are presented within a general control framework organized in hierarchical layers.

It reflects the informatic interactions between the user, the BCI operator, the shared controller, the robotic device and the environment. Each sub-layer of the BCI operator is discussed in detail, highlighting the feature extraction, classification and execution methods employed by the various systems. The key features of all applications and their interaction with the environment are reviewed for EEG-based activity mode recognition and presented in the form of a table.

We suggest structuring EEG-BCI controlled LL assistive devices within the presented framework for a future generation of intent-based multifunctional controllers.

Despite the development of controllers for BCI-based wearable and assistive devices that can seamlessly integrate user intent, such systems still face practical challenges; these are identified here and can be constructive for future developments in the field.

Introduction

The field of assistive technologies for mobility rehabilitation is improving through the introduction of electrophysiological signals to control these devices.

Such a system, called a brain-computer interface (BCI), runs independently of physical or muscular intervention, using brain signals that reflect the user's intent to control devices/limbs (Millán et al., 2010; Lebedev and Nicolelis, 2017).

The most commonly used non-invasive modality to record brain signals is electroencephalography (EEG).

EEG signals are deciphered into control commands in order to restore communication between the brain and the output device when the natural communication channel, i.e., neuronal activity, is disrupted. Recent reviews on EEG-BCI for communication and rehabilitation of the lower limbs (LL) can be found in the literature (Cervera et al., 2018; Deng et al., 2018; He et al., 2018a; Lazarou et al., 2018; Semprini et al., 2018; Slutzky, 2018).

About five decades ago, EEG-BCIs used computer cursor movements to communicate user intents for patient assistance in various applications (Vidal, 1973; Wolpaw et al., 2002; Lebedev and Nicolelis, 2017).

The applications are now widespread, as machine learning has become an essential component of BCI, functional in different fields of neurorobotics and neuroprosthetics. For the lower extremity, applications include human locomotion assistance, gait rehabilitation, and enhancement of the physical abilities of able-bodied humans (Deng et al., 2018).

Devices for locomotion or mobility assistance vary from wearable to (non-wearable) assistive-robot devices. Wearable devices such as exoskeletons, orthoses and prostheses, and assistive-robot devices including wheelchairs, guiding humanoids, and telepresence and mobile robots for navigation, are the focus of our investigation.

Control schemes offered by these systems rely on inputs derived from electrophysiological signals, electromechanical sensors on the device, and the deployment of a finite-state controller that attempts to infer the user's motion intention in order to generate correct walking trajectories with wearable robots (Duvinage et al., 2012; Jimenez-Fabian and Verlinden, 2012; Herr et al., 2013; Contreras-Vidal et al., 2016).

Input signals are typically extracted from the residual limb/muscles, i.e., amputated or disarticulated lower limbs (LL), via electromyography (EMG), from users with no cortical lesions and intact cognitive functions. Such solutions consequently preclude patient groups whose injuries necessitate direct cortical input to the BCI controller, for instance users with neuromotor disorders such as spinal cord injury (SCI) and stroke, or with inactive efferent nerves/synergistic muscle groups.

In this case, direct cortical inputs from EEG could be the central pattern generators (CPG) that generate basic motor patterns at the supraspinal or cortical level (premotor and motor cortex), or the LL kinesthetic motor imagery (KMI) signals (Malouin and Richards, 2010).

The realization of BCI controllers solely driven by EEG signals for controlling LL wearable/assistive devices is therefore possible (Lee et al., 2017). Several investigations reiterate that the CPG, with less supraspinal control, is involved in the control of bipedal locomotion (Dimitrijevic et al., 1998; Beloozerova et al., 2003; Tucker et al., 2015).

This provides the basis for the development of controllers driven directly by cortical activity correlated with the user's intent for volitional movements (Nicolas-Alonso and Gomez-Gil, 2012; Angeli et al., 2014; Tucker et al., 2015; Lebedev and Nicolelis, 2017) instead of by EMG signals. Consequently, controllers with EEG-based activity mode recognition for portable assistive devices have become an alternative for achieving seamless results (Presacco et al., 2011b).

However, when employing EEG signals as input to the BCI controller, the notion that EEG signals from the cortex can be useful for locomotion control needs to be validated.

Though cortical sites encode movement intents, the kinetic and kinematic changes necessary to execute the intended movement are essential factors to be considered. Studies indicate that the selective recruitment of embedded “muscle synergies” provides an efficient means of intent-driven, selective movement, i.e., these synergies, stored as CPGs, specify the spatial organization of muscle activation and characterize different biomechanical subtasks (Chvatal et al., 2011; Chvatal and Ting, 2013).

According to Maguire et al. (2018), Chvatal and Ting (2012) identified different muscle synergies for the control of muscle activity and coordination during human walking. According to Petersen et al. (2012), the swing phase is more influenced by central cortical control, i.e., dorsiflexion in early stance at heel strike, and during the pre-swing and swing phases for energy transfer from trunk to leg.

They also emphasized the importance of cortical activity during steady unperturbed gait for the support of CPG activity. Descending cortical signals communicate with spinal networks to ensure that accurate changes in limb movement are appropriately integrated into the gait pattern (Armstrong, 1988).

Subpopulations of motor-cortical neurons activate sequentially during the step cycle, particularly at the initiation of pre-swing and swing (Drew et al., 2008).

The importance of cortical activation upon motor imagery (MI) of locomotor tasks has been reported by Malouin et al. (2003) and Pfurtscheller et al. (2006b). Similarly, electrocortical activity coupled to the gait cycle during treadmill walking or LL control, for applications such as EEG-BCI exoskeletons and orthotic devices, has been confirmed (He et al., 2018b; Gwin et al., 2010, 2011; Wieser et al., 2010; Presacco et al., 2011a,b; Chéron et al., 2012; Bulea et al., 2013, 2015; Jain et al., 2013; Petrofsky and Khowailed, 2014; Kumar et al., 2015; Liu et al., 2015). This provides the rationale for BCI controllers that incorporate cortical signals for high-level commands based on the user's intent to walk/bipedal locomotion or kinesthetic motor imagery of the LL.

While BCIs may not require any voluntary muscle control, they are certainly dependent on brain response functions; therefore, the choice of BCI depends on the user's sensorimotor lesion and adaptability.

Non-invasive types of BCI depend on EEG signals used for communication, which are elicited under specific experimental protocols. The deployed electrophysiological signals that we investigate include oscillatory/sensorimotor rhythms (SMR), elicited upon walking intent, MI or motor execution (ME) of a task, and evoked potentials such as event-related potentials (ERP/P300) and visual evoked potentials (VEP). Such a BCI functions as a bridge that brings sensory input into the brain, bypassing damaged sight, hearing or sensing abilities.

Figure 1 shows a schematic description of a BCI system based on MI, adapted from He et al. (2015). The user performs MI of the limb(s), which is encoded in the EEG recording; features representing the task are deciphered, processed and translated into commands in order to control an assistive-robot device.
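As a minimal illustration of this pipeline (a sketch, not a reconstruction of the system in He et al., 2015), the loop below strings the acquisition, feature-extraction, translation and execution stages together as Python stubs; all function names and the synthetic data are hypothetical placeholders, and concrete forms of the individual layers are sketched in the framework section below.

```python
# Hypothetical skeleton of the closed MI-BCI loop: acquire EEG, decode intent, act.
import numpy as np

def acquire_window():
    """Pull one multi-channel EEG window from the amplifier (stub: synthetic data)."""
    return np.random.randn(8, 250)          # 8 channels x 1 s at 250 Hz

def extract_features(window):
    """Feature-extraction layer: encode the MI task as a feature vector (stub)."""
    return window.var(axis=1)                # placeholder: per-channel variance

def classify(features):
    """Translation layer: map the feature vector to a discrete intent (stub)."""
    return int(np.argmax(features)) % 3      # placeholder: 0=stop, 1=left, 2=right

def send_command(intent):
    """Execution layer: forward the decoded intent to the assistive device (stub)."""
    print({0: "stop", 1: "turn_left", 2: "turn_right"}[intent])

for _ in range(5):                           # closed loop from user intent to device action
    send_command(classify(extract_features(acquire_window())))
```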

The reviewed control schemes deployed by wearable LL and assistive-robot devices are presented in a novel way, i.e., in the form of a general control framework organized in hierarchical layers. It shows the informatic interactions between the user, the BCI operator, the shared controller, and the robotic device with its environment. The BCI operator is discussed in detail in light of the feature extraction, classification and execution methods employed by all reviewed systems. The key features of present state-of-the-art EEG-based BCI applications and their interaction with the environment are summarized in the form of a table. The proposed BCI control framework can accommodate similar systems based on fundamentally different classes. We expect the incorporation of this novel framework to advance user-machine adaptation algorithms in BCI.

The reviewed control schemes indicate that MI/ME of LL tasks, as aspects of SMR-based BCI, has not been used as extensively as that of the upper limbs (Tariq et al., 2017a,b, 2018). This is due to the small representation area of the LL, which, in contrast to the upper limbs, is located inside the interhemispheric fissure of the sensorimotor cortex (Penfield and Boldrey, 1937). The review is an effort to advance the development of the user's LL-related mental tasks for BCI reliability and confidence measures.

The last section discusses the challenges presently faced by EEG-BCI controlled wearable and assistive technology in achieving seamless real-time control and regaining a natural gait cycle with a minimal probability of non-volitional commands, along with possible future developments in these applications.

General Control Framework for BCI Wearable Lower-Limb and Assistive-Robot Devices

In order to structure the control architecture adopted by various BCI wearable LL and assistive-robot devices, a general framework is presented in Figure 2. This framework is extended from Tucker et al. (2015) and is applicable to a range of EEG-BCI controlled devices for LL assistance, including portable exoskeletons, orthoses, prostheses, and assistive robots (wheelchairs, humanoids, and navigation/telepresence robots).

Figure 2 reflects the generalized control framework, in which electrophysiological and transduced signal interactions along the feedforward and feedback loops are shown for motion-intent recognition during activity mode. Integral parts of the framework include the user of the assistive-robot device, the assistive-robot device itself, a BCI operator structure with sub-level controls, shared control, a communication protocol and the interaction with the environment. The BCI operator structure consists of three sub-layers: the feature extraction, translation and execution layers. As a precaution to ensure human-robot interaction safety, safety layers are attached to the user and robotic-device parts of the framework. The control framework is in a generalized form applicable to all brain-controlled assistive robots.

BCI control is driven by the recognition of the user's motion intentions; therefore, we begin at the point of origin where motion intentions arise (the cortical level). The first step involves perceiving and interpreting the user's physiological state (i.e., MI/ME or ERP) acquired via EEG. Following this, the status of the physical interaction between the user and the environment (and vice versa), and between the robotic device and the environment (and vice versa), is checked. The assistive robot's state is determined via electromechanical sensors. The user and assistive-robot statuses are input to the BCI operator and the shared controller, respectively.

Raw signals from the user and the assistive LL device pass through the communication protocol, which directs them to the connected client, i.e., the BCI operator, via the pre-processing and shared control modules. Real-time signal acquisition and operating software such as OpenViBE, BioSig, BCI++ or BCI2000 can be used to assign event markers to the recorded data (Schalk et al., 2004; Mellinger and Schalk, 2007; Renard et al., 2010). The streaming connection can be made using TCP (when the time-synchronization requirements do not need accuracy below 100 ms) or LSL (lab streaming layer), which incorporates built-in network and synchronization capabilities (with an accuracy of 1 ms) and is recommended for applications based on ERPs.
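As an illustration of how such a streaming client is typically opened, the snippet below uses the Python bindings for LSL (pylsl); the stream type "EEG" and the polling loop are assumptions for the sketch, not details of any reviewed system.

```python
# Sketch of an LSL client pulling time-stamped EEG samples; assumes an acquisition
# program elsewhere on the network is already publishing a stream of type "EEG".
from pylsl import StreamInlet, resolve_stream

streams = resolve_stream('type', 'EEG')       # discover EEG streams on the network
inlet = StreamInlet(streams[0])               # connect to the first stream found

for _ in range(1000):
    sample, timestamp = inlet.pull_sample()   # one multi-channel sample + LSL clock time
    # hand (timestamp, sample) to the pre-processing / shared-control module here
```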

Among the control framework components, the BCI operator is the core part, comprising three sub-layers; it is described in detail in the section BCI Operator.

At the feature extraction layer (intent recognition), the user's intended activities related to LL movements are perceived, discerned and interpreted. Signal features associated with the user's kinesthetic intent/execution of a motor task (in the case of SMR) are encoded in the form of a feature vector (Lotte, 2014). Activity-mode recognition for ERP, against a displayed oddball menu for a specific location, uses frequency- or time-domain features. It is the user's direct volitional control that allows voluntary manipulation of the state of the device (e.g., joint position, speed, velocity and torque).
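One common way to realize this layer for SMR-type intents is a band-power feature vector; the sketch below is illustrative only, with assumed mu (8-12 Hz) and beta (13-30 Hz) bands, sampling rate and channel count.

```python
# Illustrative feature-extraction layer: log band power per channel in the mu and
# beta bands, stacked into a single feature vector for the translation layer.
import numpy as np
from scipy.signal import welch

def band_power_features(window, fs=250, bands=((8, 12), (13, 30))):
    freqs, psd = welch(window, fs=fs, nperseg=fs)         # PSD per channel
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs <= hi)
        feats.append(np.log(psd[:, mask].mean(axis=1)))   # mean power in the band
    return np.concatenate(feats)                          # length: n_channels * n_bands

window = np.random.randn(8, 500)          # synthetic 8-channel, 2-s EEG window at 250 Hz
feature_vector = band_power_features(window)
```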

The translation layer (weighted class) handles the translation of the extracted signal features into machine-understandable commands that carry the user's intent to manipulate the robotic device. This is done by supervised or unsupervised learning (a classification algorithm), which essentially estimates the weighted class represented by the feature vector and identifies the cognitive patterns to map to the desired state (a unique command).
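The translation layer is often implemented with a standard supervised classifier; the snippet below is a generic scikit-learn sketch using linear discriminant analysis on synthetic feature vectors, not a reconstruction of any reviewed system.

```python
# Illustrative translation layer: LDA mapping feature vectors to intent classes.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))         # 200 labelled feature vectors (synthetic)
y_train = rng.integers(0, 3, size=200)       # intent labels: 0=stop, 1=left, 2=right

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

x_new = rng.normal(size=(1, 16))             # one new feature vector from the layer above
intent = int(clf.predict(x_new)[0])          # weighted class -> unique command id
```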

The desired state reflecting user intent is carried to the execution layer (commands for device-specific control), where an error approximation is performed with reference to the current state. The state of the device is also sent to the execution layer via the shared controller, as feedforward control, so that the device complies with the execution layer. The execution layer sends control commands to the actuator(s) of the device and visual feedback to the user via the shared control unit in order to minimize possible error. The feedback control plays a vital role in achieving the required output (it usually accounts for the kinematic or kinetic properties of the robot device).
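A minimal sketch of the execution layer's role follows, under the assumption of discrete device commands and a simple desired-versus-current state check; the command set and device interface shown are hypothetical.

```python
# Hypothetical execution layer: compare the desired state (decoded intent) with the
# current state reported through the shared controller and actuate only on a mismatch.
COMMANDS = {0: "stop", 1: "turn_left", 2: "turn_right", 3: "forward"}

def execute(desired: int, current: int, send) -> int:
    if desired != current:            # error approximation: desired vs. current state
        send(COMMANDS[desired])       # command the actuator(s) toward the desired state
    return desired                    # new current state, fed back via the shared controller

current_state = 0
current_state = execute(3, current_state, send=print)   # issues "forward"
current_state = execute(3, current_state, send=print)   # already there: no new command
```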

This closes the overall control loop, and the robotic device actuates to perform the required task(s). As the wearable assistive robot is placed in close physical contact with the user, and the powered device is likely to generate output force, safety mechanisms for both the user and the hardware are taken into consideration in the control framework. Inter-networking between subsystems of the generalized control architecture relies on the exchange of information at the signal level as well as the physical level.


Source:
Georgia Institute of Technology
Media Contacts:
John Toon – Georgia Institute of Technology
Image Source:
The image is credited to Woon-Hong Yeo.

Original Research: Closed access
“Fully portable and wireless universal brain-machine interfaces enabled by flexible scalp electronics and deep learning algorithm”. Musa Mahmood, et al.
Nature Machine Intelligence doi:10.1038/s42256-019-0091-7.
