Intel has announced a self-learning chip designed to work like the human brain.
The announcement, from Intel Labs' Dr. Michael Mayberry, is valuable on two levels: it presents information about the chip itself, and it also offers insight into what Intel scientists mean when they describe a chip that can mimic the basic mechanics of the human brain.
That is interesting language, but what does it actually mean?
What does it do for us?
Mayberry offered some examples of the chip's potential impact: complex decisions made faster and adapted over time; industrial problems solved using learned experiences; first responders to a missing or abducted person report using image-recognition applications to analyze streetlight-camera images; and stoplights that automatically adjust their timing to the flow of traffic, easing gridlock.
He said, “Our work in neuromorphic computing builds on decades of research and collaboration…combination of chip expertise, physics and biology yielded an environment for new ideas.”
Neuromorphic computing draws from what we understand about the brain’s architecture and its computations.
In turn, Loihi mimics how the brain functions: it uses data to learn and make inferences, it gets smarter over time, and it does not need to be trained in the traditional way.
Mayberry explained: “Machine learning models such as deep learning have made tremendous recent advancements by using extensive training datasets to recognize objects and events. However, unless their training sets have specifically accounted for a particular element, situation or circumstance, these machine learning systems do not generalize well.”
He said, “Compared to technologies such as convolutional neural networks and deep learning neural networks, the Loihi test chip uses many fewer resources on the same task.”
Mayberry wrote about asynchronous spiking.
“The brain’s neural networks relay information with pulses or spikes, modulate the synaptic strengths or weight of the interconnections based on timing of these spikes, and store these changes locally at the interconnections.”
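To make the spike-timing idea concrete, here is a minimal sketch of a generic, textbook pair-based spike-timing-dependent plasticity (STDP) rule. The function name, parameters and constants are invented for this illustration and do not represent Loihi's actual learning rule or any Intel API; the point is simply that a synapse strengthens when the presynaptic spike precedes the postsynaptic one, weakens when it follows, and the resulting change is stored locally at the connection.

```python
import math

# Generic pair-based STDP rule (illustrative only; not Loihi's rule or API).
# dt = t_post - t_pre, in milliseconds.
def stdp_update(weight, dt, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    if dt > 0:
        # Pre-spike arrived before the post-spike: potentiate (strengthen).
        weight += a_plus * math.exp(-dt / tau)
    else:
        # Pre-spike arrived at or after the post-spike: depress (weaken).
        weight -= a_minus * math.exp(dt / tau)
    # The adjusted weight is stored locally at the interconnection, kept in range.
    return max(w_min, min(w_max, weight))

w = 0.5
w = stdp_update(w, dt=+5.0)   # pre before post: weight goes up
w = stdp_update(w, dt=-5.0)   # post before pre: weight goes down
print(round(w, 4))
```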
The chip was reported to be extremely energy-efficient. Mayberry said it is “up to 1,000 times more energy-efficient than general purpose computing required for typical training systems.”
Introducing the Loihi test chip
The Loihi research test chip includes digital circuits that mimic the brain’s basic mechanics, making machine learning faster and more efficient while requiring lower compute power. Neuromorphic chip models draw inspiration from how neurons communicate and learn, using spikes and plastic synapses that can be modulated based on timing.
This could help computers self-organize and make decisions based on patterns and associations.
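As a rough picture of the kind of spiking neuron such circuits emulate, the sketch below simulates a textbook leaky integrate-and-fire neuron in plain Python. It is a conceptual stand-in with made-up parameter values, not a model of Loihi's hardware: the neuron accumulates incoming current, leaks some of it each step, and emits a pulse when its potential crosses a threshold.

```python
# A textbook leaky integrate-and-fire (LIF) neuron in discrete time. This is a
# conceptual stand-in for the digital spiking circuits described above, not a
# model of Loihi's actual hardware.
def simulate_lif(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    v = 0.0                        # membrane potential
    spikes = []
    for i in input_current:
        v = leak * v + i           # integrate the input, with leak toward rest
        if v >= threshold:         # threshold crossed: emit a pulse ("spike")
            spikes.append(1)
            v = v_reset            # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A constant weak input makes the neuron charge up over several steps,
# then fire sparse, repeated spikes.
print(simulate_lif([0.3] * 20))
```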
The Loihi test chip offers highly flexible on-chip learning and combines training and inference on a single chip.
This allows machines to be autonomous and to adapt in real time instead of waiting for the next update from the cloud.
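One way to picture training and inference living on the same chip is an online learning loop in which every prediction is immediately followed by a local weight update, rather than batching data and waiting for a retrained model from the cloud. The sketch below uses a simple perceptron-style rule purely for illustration; the functions and data are invented for this example, and this is not the rule that Loihi's learning engine runs.

```python
# Illustrative online learning loop: every prediction (inference) is followed
# immediately by a local weight update, instead of batching data and waiting
# for a retrained model from the cloud. A perceptron-style rule is used here
# purely for illustration; it is not the rule Loihi's learning engine runs.
def predict(weights, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

def online_step(weights, x, label, lr=0.1):
    y = predict(weights, x)                                     # inference
    error = label - y
    return [w + lr * error * xi for w, xi in zip(weights, x)]   # immediate adaptation

weights = [0.0, 0.0]
stream = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1)]   # toy data arriving in real time
for x, label in stream:
    weights = online_step(weights, x, label)
print(weights)
```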
Researchers have demonstrated learning at a rate that is a 1-million-times improvement over other typical spiking neural nets, as measured by the total operations required to achieve a given accuracy when solving MNIST digit-recognition problems.
The self-learning capabilities prototyped by this test chip have enormous potential to improve automotive and industrial applications as well as personal robotics – any application that would benefit from autonomous operation and continuous learning in an unstructured environment, such as recognizing the movement of a car or a bike.
In the first half of 2018, the Loihi test chip will be shared with leading university and research institutions with a focus on advancing AI.
Additional Highlights
The Loihi test chip’s features include:
- Fully asynchronous neuromorphic many-core mesh that supports a wide range of sparse, hierarchical and recurrent neural network topologies, with each neuron capable of communicating with thousands of other neurons.
- Each neuromorphic core includes a learning engine that can be programmed to adapt network parameters during operation, supporting supervised, unsupervised, reinforcement and other learning paradigms.
- Fabrication on Intel’s 14 nm process technology.
- A total of 130,000 neurons and 130 million synapses (see the quick arithmetic note after this list).
- Development and testing of several algorithms with high algorithmic efficiency for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation.
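One small arithmetic note on the figures above, offered as an observation rather than an Intel specification: dividing the synapse count by the neuron count gives the average number of synapses per neuron.

```python
# Quick arithmetic check (illustrative only): 130 million synapses shared across
# 130,000 neurons works out to an average of about 1,000 synapses per neuron.
synapses = 130_000_000
neurons = 130_000
print(synapses / neurons)   # -> 1000.0
```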
What’s next?
Spurred by advances in computing and algorithmic innovation, the transformative power of AI is expected to impact society on a spectacular scale.
Today, Intel is applying its strength in driving Moore's Law and its manufacturing leadership to bring to market a broad range of products (Intel® Xeon® processors, Intel® Nervana™ technology, Intel Movidius™ technology and Intel FPGAs) that address the unique requirements of AI workloads from the edge to the data center and the cloud.
Both general purpose compute and custom hardware and software come into play at all scales.
The Intel® Xeon Phi™ processor, widely used in scientific computing, has generated some of the world's largest models for interpreting large-scale scientific problems, and the Movidius Neural Compute Stick is an example of a 1-watt deployment of previously trained models.
As AI workloads grow more diverse and complex, they will test the limits of today’s dominant compute architectures and precipitate new disruptive approaches.
Looking to the future, Intel believes that neuromorphic computing offers a way to provide exascale performance in a construct inspired by how the brain works.
Intel hopes readers will follow the milestones coming from Intel Labs over the next few months as it brings concepts like neuromorphic computing into the mainstream in order to support the world's economy for the next 50 years.
In a future with neuromorphic computing, all of what you can imagine – and more – moves from possibility to reality, as the flow of intelligence and decision-making becomes more fluid and accelerated.
Intel's vision for developing innovative compute architectures remains steadfast, and the company says it knows what the future of compute looks like because it is building it today.