No one can say whether androids will dream of electric sheep, but they will almost certainly need periods of rest that offer benefits similar to those that sleep provides to living brains, according to new research from Los Alamos National Laboratory.
“We study spiking neural networks, which are systems that learn much as living brains do,” said Los Alamos National Laboratory computer scientist Yijing Watkins.
“We were fascinated by the prospect of training a neuromorphic processor in a manner analogous to how humans and other biological systems learn from their environment during childhood development.”
Watkins and her research team found that the network simulations became unstable after continuous periods of unsupervised learning. When they exposed the networks to states that are analogous to the waves that living brains experience during sleep, stability was restored. “It was as though we were giving the neural networks the equivalent of a good night’s rest,” said Watkins.
The discovery came about as the research team worked to develop neural networks that closely approximate how humans and other biological systems learn to see. The group initially struggled with stabilizing simulated neural networks undergoing unsupervised dictionary training, which involves classifying objects without having prior examples to compare them to.
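(For readers unfamiliar with the term: dictionary learning is commonly posed as the sparse-coding problem below. The article does not give the exact objective the Los Alamos team optimized, so this is the standard formulation rather than theirs.)

$$
\min_{D,\,\{\alpha_i\}} \sum_i \left\| x_i - D\alpha_i \right\|_2^2 + \lambda \left\| \alpha_i \right\|_1
$$

where each input $x_i$ (e.g., an image patch) is approximated as a sparse combination $\alpha_i$ of learned dictionary atoms, the columns of $D$, and $\lambda$ controls sparsity.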
“The issue of how to keep learning systems from becoming unstable really only arises when attempting to utilize biologically realistic, spiking neuromorphic processors or when trying to understand biology itself,” said Los Alamos computer scientist and study coauthor Garrett Kenyon.
“The vast majority of machine learning, deep learning, and AI researchers never encounter this issue because in the very artificial systems they study they have the luxury of performing global mathematical operations that have the effect of regulating the overall dynamical gain of the system.”
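(One familiar example of such a global operation is batch normalization, which rescales every activation by statistics computed across the whole batch and thereby implicitly regulates the network's gain. The article does not name a specific operation, so this is offered only as an illustration.)

$$
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}
$$

where $\mu_B$ and $\sigma_B^2$ are the mean and variance of the activations over a batch $B$.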
The researchers characterize the decision to expose the networks to an artificial analog of sleep as nearly a last-ditch effort to stabilize them. They experimented with various types of noise, roughly comparable to the static you might encounter between stations while tuning a radio.
The best results came when they used waves of so-called Gaussian noise, which includes a wide range of frequencies and amplitudes. They hypothesize that the noise mimics the input received by biological neurons during slow-wave sleep.
The results suggest that slow-wave sleep may act, in part, to ensure that cortical neurons maintain their stability and do not hallucinate.
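The article gives no implementation details, so the following is only a toy sketch of the setup it describes: a small leaky integrate-and-fire (LIF) network run through an "awake" phase on structured input and a "sleep" phase driven by Gaussian noise. The network, parameters, and drive signals are all illustrative assumptions, not the Los Alamos team's code.

```python
# Toy sketch (illustrative assumptions throughout; not the authors' method):
# an LIF network alternating between structured "awake" input and a
# Gaussian-noise "sleep" phase, as described in the article.
import numpy as np

rng = np.random.default_rng(0)

N = 100                 # number of neurons (assumption)
tau = 20.0              # membrane time constant, ms (assumption)
v_thresh, v_reset = 1.0, 0.0
dt = 1.0                # time step, ms

v = np.zeros(N)                          # membrane potentials
w = rng.normal(0.0, 0.1, size=(N, N))    # random recurrent weights (assumption)
np.fill_diagonal(w, 0.0)                 # no self-connections

def step(v, drive):
    """Advance the LIF network one time step under an external drive."""
    spikes = v >= v_thresh
    v = np.where(spikes, v_reset, v)     # reset neurons that spiked
    # Leaky integration of recurrent input plus external drive.
    dv = (-v + w @ spikes.astype(float) + drive) * (dt / tau)
    return v + dv, spikes

def run_phase(v, n_steps, make_drive):
    """Run one phase and return final state and mean firing rates."""
    rates = np.zeros(N)
    for _ in range(n_steps):
        v, spikes = step(v, make_drive())
        rates += spikes
    return v, rates / n_steps

def awake_drive():
    # Structured, spatially correlated input (a stand-in for images).
    phase = rng.uniform(0.0, 2.0 * np.pi)
    return 1.0 + 0.5 * np.sin(phase + np.linspace(0.0, 2.0 * np.pi, N))

def sleep_drive():
    # Broadband Gaussian noise, loosely mimicking the input biological
    # neurons are thought to receive during slow-wave sleep (per the article).
    return rng.normal(0.0, 0.6, size=N)

v, awake_rates = run_phase(v, 1000, awake_drive)
v, sleep_rates = run_phase(v, 1000, sleep_drive)
print(f"mean firing rate awake:  {awake_rates.mean():.3f} spikes/step")
print(f"mean firing rate asleep: {sleep_rates.mean():.3f} spikes/step")
```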
The group's next goal is to implement their algorithm on Intel's Loihi neuromorphic chip. They hope that allowing Loihi to sleep from time to time will enable it to stably process information from a silicon retina camera in real time.
If the findings confirm the need for sleep in artificial brains, we can probably expect the same to be true of androids and other intelligent machines that may come about in the future.
Watkins will be presenting the research at the Women in Computer Vision Workshop on June 14 in Seattle.
“Building a vast digital simulation of the brain could transform neuroscience and medicine and reveal new ways of making more powerful computers” (Markram et al., 2011). The human brain is by far the most computationally complex, efficient, and robust computing system operating under low-power and small-size constraints.
It achieves this with roughly 100 billion neurons and 100 trillion synapses. Even today's supercomputing platforms cannot simulate the full cortex in real time using detailed, biophysically complex neuron models.
For example, for mouse-scale (2.5 × 10⁶ neurons) cortical simulations, a personal computer uses 40,000 times more power but runs 9,000 times slower than a mouse brain (Eliasmith et al., 2012).
The simulation of a human-scale cortical model (2 × 10¹⁰ neurons), which is the goal of the Human Brain Project, is projected to require an exascale supercomputer (10¹⁸ FLOPS) and as much power as a quarter-million households (0.5 GW).
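A quick sanity check on these figures (simple arithmetic from the numbers above): using 40,000 times more power for 9,000 times longer means the simulation consumes roughly

$$
4\times10^{4} \times 9\times10^{3} \approx 3.6\times10^{8}
$$

times more energy than the mouse brain to cover the same stretch of biological time. Likewise, 0.5 GW spread over a quarter-million households is about 2 kW per household, in line with typical average household power draw.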
The electronics industry is seeking solutions that will enable computers to handle the enormous increase in data processing requirements. Neuromorphic computing is an alternative solution that is inspired by the computational capabilities of the brain.
The observation that the brain computes with the analog physics of its neural substrate, on principles fundamentally different from those of digital computing, initiated the field of neuromorphic engineering (NE) (Mead, 1989a).
Silicon neurons are hybrid analog/digital very-large-scale integrated (VLSI) circuits that emulate the electrophysiological behavior of real neurons and synapses. Neural networks using silicon neurons can be emulated directly in hardware rather than being limited to simulations on a general-purpose computer.
Such hardware emulations are much more energy efficient than computer simulations and are therefore suited to real-time, large-scale neural emulation. Because every neuron runs in parallel in hardware, the emulation operates in real time and its speed can be independent of the number of neurons or their coupling.
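For concreteness, the leaky integrate-and-fire (LIF) model, the I&F dynamics that several of the systems in Table 1 implement, can be written as

$$
\tau_m \frac{dV}{dt} = -\left(V - V_{\mathrm{rest}}\right) + R_m I(t),
\qquad V \leftarrow V_{\mathrm{reset}} \ \text{when } V \ge V_{\mathrm{th}},
$$

where $\tau_m$ is the membrane time constant, $R_m$ the membrane resistance, and $I(t)$ the input current. A silicon neuron realizes these dynamics directly in the physics of its analog circuits rather than by numerical integration.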
There has been growing interest in neuromorphic processors to perform real-time pattern recognition tasks, such as object recognition and classification, owing to the low energy and silicon area requirements of these systems (Thakur et al., 2017; Wang et al., 2017).
These large systems will find application in the next generation of technologies including autonomous cars, drones, and brain-machine interfaces. The neuromorphic chip market is expected to grow exponentially owing to an increasing demand for artificial intelligence and machine learning systems and the need for better-performing ICs and new ways of computation as Moore’s law is pushed to its limit (MarketsandMarkets, 2017).
The brains of cognitively sophisticated species have evolved to process sensory information with computing machinery that is highly parallel and redundant, achieving great precision and efficiency in pattern recognition and association despite being built from intrinsically sluggish, noisy, and unreliable neural and synaptic components.
Brain-inspired neuromorphic processors show great potential for building compact natural signal processing systems, pattern recognition engines, and real-time autonomous agents (Chicca et al., 2014; Merolla et al., 2014; Qiao et al., 2015).
Profiting from their massively parallel computing substrate (Qiao et al., 2015) and co-localized memory and computation features, these hardware devices have the potential to solve the von Neumann memory bottleneck problem (Indiveri and Liu, 2015) and to reduce power consumption by several orders of magnitude.
Compared to purely digital solutions, mixed-signal neuromorphic processors offer further advantages: lower silicon area usage, lower power consumption, reduced bandwidth requirements, and richer computational dynamics.
Several neuromorphic systems are already in commercial use. For example, Synaptics Inc. develops touchpad and biometric technologies for portable devices, Foveon Inc. develops complementary metal-oxide-semiconductor (CMOS) color imagers (Reiss, 2004), and Chronocam Inc. builds asynchronous time-based image sensors based on the work in Posch et al. (2011).
Another product, an artificial retina, is used in the Logitech Marble trackball, which optically measures the rotation of a ball to move the cursor on a computer screen (Arreguit et al., 1996). The dynamic vision sensor (DVS) by iniLabs Ltd. is another successful neuromorphic product (Lichtsteiner et al., 2008). Table 1 provides a detailed timeline of major breakthroughs in large-scale brain simulation and neuromorphic hardware.
Table 1
Timeline of neuromorphic simulation and hardware.

| References | Contributions |
| --- | --- |
| Mead, 1989a | Initiated the field of neuromorphic engineering |
| Mead, 1989b | Adaptive Retina: among the first biologically inspired silicon retina chips |
| Yasunaga et al., 1990 | LSI composed of 576 digital neurons |
| Mahowald and Douglas, 1991 | Silicon Neuron: neuron using subthreshold aVLSI circuitry |
| Prange and Klar, 1993 | BIONIC: an emulator simulating 16 neurons with 16 synapses |
| Wolpert and Micheli-Tzanakou, 1996 | Modeling nerve networks based on the I&F model in silicon |
| Jahnke et al., 1996 | NESPINN: SIMD/dataflow architecture for a neuro-computer |
| Schoenauer et al., 1999 | MASPINN: a neuro-accelerator for spiking neural networks |
| Wolff et al., 1999 | ParSPIKE: a DSP accelerator simulating large spiking neural networks |
| Schoenauer et al., 2002 | NeuroPipe-Chip: a digital neuro-processor for spiking neural networks |
| Furber et al., 2006 | High-performance computing for systems of spiking neurons |
| Markram, 2006 | Blue Brain Project: large-scale simulation of the brain at the cellular level |
| Boahen, 2006 | Neurogrid: emulating a million neurons in the cortex |
| Koickal et al., 2007 | aVLSI adaptive neuromorphic olfaction chip |
| Maguire et al., 2007 | BenNuey: platform comprising up to 18 M neurons and 18 M synapses |
| Djurfeldt et al., 2008 | Brain-scale computer simulation of the neocortex on the IBM Blue Gene |
| Izhikevich and Edelman, 2008 | Simulation of a thalamocortical model with 10¹¹ neurons and 10¹⁵ synapses |
| Ananthanarayanan et al., 2009 | Cortical simulations with 10⁹ neurons and 10¹³ synapses |
| Serrano-Gotarredona et al., 2009 | CAVIAR: a 45 k-neuron, 5 M-synapse, 12 G-connects/s AER hardware system |
| Schemmel et al., 2008, 2010 | BrainScaleS: wafer-scale neuromorphic hardware |
| Seo et al., 2011 | CMOS neuromorphic chip with 256 neurons and 64 K synapses |
| Cassidy et al., 2011 | EU SCANDLE: one-million-neuron, single-FPGA neuromorphic system |
| Moore et al., 2012 | Bluehive project: simulation with 256 k neurons and 256 M synapses |
| Zamarreno-Ramos et al., 2013 | AER system with 64 processors, 262 k neurons, and 32 M synapses |
| Furber et al., 2014 | SpiNNaker: digital neuromorphic multicore System-on-Chip |
| Merolla et al., 2014 | TrueNorth: IBM's “neurosynaptic” chip |
| Benjamin et al., 2014 | Neurogrid: a mixed analog-digital large-scale neuromorphic simulator |
| Park et al., 2014 | IFAT: neuromorphic processor with a 65 k-neuron I&F array transceiver |
| Wang et al., 2014 | An FPGA framework simulating 1.5 million LIF neurons in real time |
| Qiao et al., 2015 | A spiking neuromorphic processor with 256 neurons and 128 K synapses |
| Cheung et al., 2016 | Neuromorphic processor capable of simulating 400 k neurons in real time |
| Pani et al., 2017 | An FPGA platform with up to 1,440 Izhikevich neurons |
| Park et al., 2017 | HiFAT-IFAT: reconfigurable large-scale neuromorphic systems |
| Moradi et al., 2018 | DYNAP-SEL: mixed-signal neuromorphic processor with self-learning |
| Davies et al., 2018 | Loihi: Intel's self-learning neuromorphic chip |
| Wang and van Schaik, 2018; Wang et al., 2018 | DeepSouth: cortex simulator with up to 2.6 billion LIF neurons |