Straight from the future: neuroscientists are teaching computers to read words directly out of people’s brains

In recent years, a fast growing understanding of how our nervous system works has enabled a fusion between man and machine, once only envisioned in science fiction, to become a reality.

Bionic limbs have been built into amputees, scientists are beginning to restore a sense of touch to these patients, and we are on our way to restoring vision in the blind.

Deciphering the function of individual neurons

The human brain contains on the order of 100 billion neurons, so understanding how our experiences are represented by those neurons is, to say the least, no easy feat.

Although we have an understanding of the major brain structures and their functions, individual neural connections greatly differ from person to person, being altered by activity and learning – which makes the idea of mind reading seem even more difficult to achieve.

However, a combination of functional magnetic resonance imaging (fMRI), electroencephalography (EEG) and lesion studies have helped in giving us a better idea of what some of our experiences look like in the brain – even at the level of individual cells.

Telepathy through brain scans

In 2006, Adrian Owen and his colleagues at Cambridge University discovered that some patients in a vegetative state could respond to questions while inside an fMRI scanner.

Observing the changing patterns of brain activity in vegetative patients enabled the researchers to glean responses to these questions.

This is fascinating, as asking patients whether they feel pain could be beneficial in dosing pain killers, and opens up the possibility of involving them in decisions regarding their care.
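
To make the idea concrete, here is a minimal, hypothetical sketch of this kind of yes/no decoding, assuming we already have mean fMRI activation values for two regions of interest. In this line of work, patients were asked to imagine playing tennis for “yes” and walking through their home for “no”, engaging motor versus spatial-navigation areas; the function, data and threshold below are illustrative only, not Owen’s actual analysis pipeline.

```python
import numpy as np

def decode_answer(motor_roi, spatial_roi, margin=0.5):
    """Toy yes/no decoder: compare mean activation in a motor-imagery
    region of interest against a spatial-navigation region of interest
    and report whichever imagery pattern clearly dominates."""
    motor, spatial = np.mean(motor_roi), np.mean(spatial_roi)
    if abs(motor - spatial) < margin:
        return "no reliable response"
    return "yes" if motor > spatial else "no"

# Hypothetical per-question activation values (arbitrary units)
print(decode_answer(np.array([2.1, 2.4, 1.9]), np.array([0.3, 0.5, 0.2])))  # -> yes
```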

Research of this kind could lead to the development of devices that enable brain-damaged patients to interact with the world.

This scan depicts patterns of a vegetative patient’s electrical activity over the head when they attended to the designated words, and when they were distracted by novel but irrelevant words. Photograph: Clinical Neurosciences/University of Cambridge (source: http://www.cam.ac.uk/research/news/patient-in-vegetative-state-not-just-aware-but-paying-attention)

Using brain imaging, scientists have built a map displaying how words and their meanings are represented across different regions of the brain

One person’s right cerebral hemisphere. The overlaid words, when heard in context, are predicted to evoke strong responses near the corresponding location. Green words are mostly visual and tactile, red words are mostly social. Illustration: Copyright Alexander Huth / The Regents of the University of California

Scientists have created an “atlas of the brain” that reveals how the meanings of words are arranged across different regions of the organ.

Like a colourful quilt laid over the cortex, the atlas displays in rainbow hues how individual words and the concepts they convey are grouped together in clusters across the brain’s outer layer of grey matter.

“Our goal was to build a giant atlas that shows how one specific aspect of language is represented in the brain, in this case semantics, or the meanings of words,” said Jack Gallant, a neuroscientist at the University of California, Berkeley.

No single brain region holds one word or concept.

A single brain spot is associated with a number of related words.

And each single word lights up many different brain spots.

Together these spots make up networks that represent the meanings of each word we use: life and love, death and taxes, clouds, Florida and bra each light up a network of their own.
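
One common way to build a map like this is a voxel-wise “encoding model”: describe each moment of a heard story by a vector of semantic features, then fit a regularized linear regression that predicts every voxel’s fMRI response from those features. The sketch below illustrates that general idea; the feature matrix, responses and parameters are random stand-ins, not the study’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 300, 50, 1000

# Semantic feature vector describing the words heard at each time point
word_features = rng.normal(size=(n_timepoints, n_features))
# Measured fMRI (BOLD) response of every voxel at each time point
voxel_responses = rng.normal(size=(n_timepoints, n_voxels))

# One regularized linear model per voxel, fit jointly by scikit-learn
model = Ridge(alpha=10.0)
model.fit(word_features, voxel_responses)

# Each voxel's learned weights say which semantic features drive it;
# projecting candidate words through those weights is what lets
# researchers label cortical locations with the words predicted to
# excite them most strongly.
predicted = model.predict(word_features)
print(predicted.shape)  # (300, 1000)
```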


Scientists have created an interactive map showing which brain areas respond to hearing different words.

Three papers recently posted to the preprint server bioRxiv show that three different teams of researchers have managed to decode speech from recordings of neurons firing.

In each study, electrodes placed directly on the brain recorded neural activity while brain-surgery patients listened to speech or read words out loud.

Then, researchers tried to figure out what the patients were hearing or saying. In each case, researchers were able to convert the brain’s electrical activity into at least somewhat-intelligible sound files.

The first paper, posted to bioRxiv on Oct. 10, 2018, describes an experiment in which researchers played recordings of speech to patients with epilepsy who were in the middle of brain surgery.

(The neural recordings taken in the experiment had to be very detailed to be interpreted, and that level of detail is available only during the rare circumstances when a brain is exposed to the air and electrodes are placed on it directly, such as in brain surgery.)

As the patients listened to the sound files, the researchers recorded neurons firing in the parts of the patients’ brains that process sound.

The scientists tried a number of different methods for turning that neuronal firing data into speech and found that “deep learning”, in which an artificial neural network learns the mapping from examples rather than from hand-written rules, worked best.

When they played the results through a vocoder, which synthesizes human voices, for a group of 11 listeners, those individuals were able to correctly interpret the words 75 percent of the time.
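
To give a flavour of what such a decoder might look like, here is a rough, hypothetical PyTorch sketch of the general approach described above: a deep network trained to map windows of neural activity onto speech parameters that a vocoder can turn back into sound. The network shape, feature sizes and data below are invented placeholders, not the authors’ actual model.

```python
import torch
import torch.nn as nn

n_samples, n_electrodes, n_vocoder_params = 2000, 128, 32

neural_windows = torch.randn(n_samples, n_electrodes)     # stand-in neural features
speech_params = torch.randn(n_samples, n_vocoder_params)  # stand-in vocoder targets

decoder = nn.Sequential(                 # small fully connected decoder network
    nn.Linear(n_electrodes, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, n_vocoder_params),
)

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):                  # minimal training loop
    optimizer.zero_grad()
    loss = loss_fn(decoder(neural_windows), speech_params)
    loss.backward()
    optimizer.step()

# In the real studies, the predicted parameters are fed to a vocoder to
# synthesize audio that listeners then try to understand.
print(float(loss))
```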

You can listen to audio from this experiment here.

The second paper, posted Nov. 27, 2018, relied on neural recordings from people undergoing surgery to remove brain tumors.

As the patients read single-syllable words out loud, the researchers recorded both the sounds coming out of the participants’ mouths and the neurons firing in the speech-producing regions of their brains.

Instead of training a deep-learning system on each individual patient, these researchers taught an artificial neural network to convert the neural recordings into audio, showing that the results were at least reasonably intelligible and similar to the recordings made by the microphones. (The audio from this experiment is here but has to be downloaded as a zip file.)
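
How similar is “reasonably similar”? The studies use formal intelligibility and similarity measures; as a generic illustration (not the metric used in this particular paper), one can compare the spectrogram of the reconstructed audio with that of the microphone recording, for example with a simple correlation. The audio arrays below are random placeholders you would replace with real recordings.

```python
import numpy as np
from scipy.signal import spectrogram

sr = 16000  # sample rate in Hz
original_audio = np.random.randn(sr * 2)       # stand-in for the microphone recording
reconstructed_audio = np.random.randn(sr * 2)  # stand-in for the decoded audio

# Compute spectrograms of both signals and correlate them
_, _, orig_spec = spectrogram(original_audio, fs=sr)
_, _, recon_spec = spectrogram(reconstructed_audio, fs=sr)

similarity = np.corrcoef(orig_spec.ravel(), recon_spec.ravel())[0, 1]
print(f"spectrogram correlation: {similarity:.3f}")
```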

The third paper, posted Aug. 9, 2018, relied on recording the part of the brain that converts specific words that a person decides to speak into muscle movements.

While no recording from this experiment is available online, the researchers reported that they were able to reconstruct entire sentences (also recorded during brain surgery on patients with epilepsy) and that people who listened to the sentences were able to correctly interpret them on a multiple choice test (out of 10 choices) 83 percent of the time.

That experiment’s method relied on identifying the patterns involved in producing individual syllables, rather than whole words.
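
As a purely hypothetical sketch of that syllable-level idea, one could classify each short window of motor-cortex activity as one of a small inventory of syllables and then string the predictions together into words and sentences. Everything below (features, labels, syllable set) is invented for illustration and is not the paper’s method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

syllables = ["ba", "da", "ga", "ti", "ku"]
rng = np.random.default_rng(1)

train_X = rng.normal(size=(500, 64))              # neural features per syllable window
train_y = rng.integers(len(syllables), size=500)  # which syllable was being produced

clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)

new_windows = rng.normal(size=(4, 64))            # four windows from an attempted word
decoded = "".join(syllables[i] for i in clf.predict(new_windows))
print(decoded)
```

For reference, guessing at random on the 10-option multiple-choice test would yield about 10 percent accuracy, so the reported 83 percent is far above chance.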

The goal in all of these experiments is to one day make it possible for people who’ve lost the ability to speak (due to amyotrophic lateral sclerosis or similar conditions) to speak through a brain-computer interface. However, the science for that application isn’t there yet.

Interpreting the neural patterns of a person just imagining speech is more complicated than interpreting the patterns of someone listening to or producing speech, Science reported. (However, the authors of the second paper said that interpreting the brain activity of someone imagining speech may be possible.)
