A neuromorphic computer that can simulate 8 million neurons is in the news.
The term “neuromorphic” suggests a design that can mimic the human brain.
And neuromorphic computing?
It is commonly described as the use of very-large-scale integration (VLSI) systems containing electronic analog circuits that mimic neuro-biological architectures present in the nervous system.
This is where Intel steps in, and significantly so.
The Loihi chip applies the principles found in biological brains to computer architectures.
The payoff for users is that Loihi can process information up to 1,000 times faster and 10,000 times more efficiently than conventional CPUs for specialized applications such as sparse coding, graph search, and constraint-satisfaction problems.
Its news release on Monday read “Intel’s Pohoiki Beach, a 64-Chip Neuromorphic System, Delivers Breakthrough Results in Research Tests.” Pohoiki Beach is Intel’s latest neuromorphic system.
Intel is celebrating that an 8 million-neuron neuromorphic system comprising 64 Loihi research chips – codenamed Pohoiki Beach – is now available to the broader research community.
Like IBM’s TrueNorth processor and other neuromorphic hardware, Loihi relies on spiking neural networks to emulate the way biological brains process information.
One of the big advantages of this computing model is how little power these chips draw compared with conventional von Neumann processors.
Potentially, that could bring the technology into areas like IoT, mobile devices, and edge computing, where high-powered AI hardware is currently lacking.
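The power advantage comes from the event-driven nature of spiking neurons: a neuron does work only when spikes arrive, and stays quiet otherwise. A minimal sketch of the standard leaky integrate-and-fire model (the names and parameter values here are illustrative, not Loihi's actual implementation) shows the basic dynamics:

```python
def simulate_lif(input_current, threshold=1.0, decay=0.9, reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron.

    The membrane potential leaks toward zero each timestep, integrates
    the incoming current, and emits a spike (1) when it crosses the
    threshold, after which it resets.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = decay * potential + current   # leak, then integrate
        if potential >= threshold:
            spikes.append(1)
            potential = reset                     # fire and reset
        else:
            spikes.append(0)
    return spikes

# Under a constant drive, the neuron charges up, fires, resets, repeats.
print(simulate_lif([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Because information is carried by sparse spike events rather than dense numeric activations updated every cycle, hardware built around this model can stay mostly idle, which is where the efficiency claims come from.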
According to Intel, Pohoiki Beach is aimed at some of the toughest challenges in computer science, such as sparse coding, simultaneous localization and mapping (SLAM), and path planning.
Conventional computers can run these types of applications, but not very efficiently, at least on a performance-per-watt basis.
For example, SLAM applications are generally used in power-constrained systems like autonomous vehicles, robotics, and virtual reality devices.
Handling this type of workload efficiently in such environments, especially for real-time applications, requires something akin to a brain-like inference capability.
In that regard, experience with Intel’s neuromorphic technology looks promising.
According to Konstantinos Michmizos of Rutgers University, his team was able to construct a highly energy-efficient SLAM solution based on the Loihi technology.
“We benchmarked the Loihi-run network and found it to be equally accurate while consuming 100 times less energy than a widely used CPU-run SLAM method for mobile robots,” said Michmizos.
In a similar vein, Chris Eliasmith, co-CEO of Applied Brain Research and professor at University of Waterloo, said that researchers were able to demonstrate 109 times lower power consumption than a GPU running a real-time deep learning benchmark and five times lower power consumption than specialized IoT inference hardware.
“Even better, as we scale the network up by 50 times, Loihi maintains real-time performance results and uses only 30 percent more power, whereas the IoT hardware uses 500 percent more power and is no longer real-time,” he said.
The news means Intel is providing greater computational scale and capacity to its research partners.
That is a large part of the reason this is a big deal: Pohoiki Beach will now be available to what Intel reports as “60 ecosystem partners.”
They are going to use the system for complex, compute-intensive problems.
IEEE Spectrum spelled out the advantage clearly.
“Researchers can use the 64-chip Pohoiki Beach system to make systems [the Pohoiki Beach system being made up of multiple Nahuku boards and containing 64 Loihi chips] that learn and see the world more like humans.”
Rich Uhlig, managing director of Intel Labs, said they were impressed with their early results “as we scale Loihi to create more powerful neuromorphic systems.”
Who are some of these “ecosystem partners”?
For one, the Telluride Neuromorphic Cognition Engineering Workshop, a three-week event ending July 19 for which Intel is a platinum sponsor, is puzzling out adaptation capabilities for a prosthetic leg, object tracking using emerging event-based cameras, and inferring tactile input to the electronic skin of an iCub robot.
Kyle Wiggers in VentureBeat drilled down to some technical details surrounding Loihi: its development toolchain “comprises the Loihi Python API, a compiler, and a set of runtime libraries for building and executing SNNs on Loihi.
It provides a way to create a graph of neurons and synapses with custom configurations, such as decay time, synaptic weight, and spiking thresholds, and a means of simulating those graphs by injecting external spikes through custom learning rules.”
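The workflow that description implies can be sketched in plain Python. To be clear, the class and method names below are hypothetical, not the real Loihi Python API; the sketch only mirrors the pattern described: configure a graph of neurons and synapses (thresholds, decay, weights), inject external spikes, and step the simulation:

```python
# Illustrative sketch only -- NOT the actual Loihi toolchain API.
class Neuron:
    def __init__(self, threshold=1.0, decay=0.8):
        self.threshold = threshold
        self.decay = decay
        self.potential = 0.0

class Network:
    def __init__(self):
        self.neurons = []
        self.synapses = []   # list of (pre_index, post_index, weight)

    def add_neuron(self, **cfg):
        self.neurons.append(Neuron(**cfg))
        return len(self.neurons) - 1

    def connect(self, pre, post, weight):
        self.synapses.append((pre, post, weight))

    def step(self, external=()):
        """Advance one timestep; `external` lists neurons spiked from outside."""
        # Determine which neurons fire this step.
        fired = set(external)
        for idx, n in enumerate(self.neurons):
            if n.potential >= n.threshold:
                fired.add(idx)
        # Reset fired neurons, decay the rest, then propagate spikes.
        for idx, n in enumerate(self.neurons):
            n.potential = 0.0 if idx in fired else n.decay * n.potential
        for pre, post, w in self.synapses:
            if pre in fired:
                self.neurons[post].potential += w
        return sorted(fired)

net = Network()
a = net.add_neuron(threshold=1.0, decay=0.8)
b = net.add_neuron(threshold=0.5, decay=0.8)
net.connect(a, b, weight=0.6)   # spikes from a excite b

net.step(external=[a])   # inject an external spike into neuron a
print(net.step())        # → [1]: b's potential (0.6) crossed its 0.5 threshold
```

The appeal of this graph-plus-timestep abstraction is that the same network description can, in principle, run on a software simulator or be compiled down to neuromorphic silicon.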
All in all, Intel’s work on a neuromorphic system could influence a next generation of AI.
Long and short: don’t waste time and energy dwelling only on conventional computer logic; lab research like this is bringing us closer to human-like cognition.
As Intel frames it:
“A coming next generation will extend AI into areas that correspond to human cognition, such as interpretation and autonomous adaptation.
This is critical to overcoming the so-called ‘brittleness’ of AI solutions based on neural network training and inference, which depend on literal, deterministic views of events that lack context and commonsense understanding.”
Intel Labs stated it is “driving computer-science research that contributes to this third generation of AI.
Key focus areas include neuromorphic computing, which is concerned with emulating the neural structure and operation of the human brain, as well as probabilistic computing, which creates algorithmic approaches to dealing with the uncertainty, ambiguity, and contradiction in the natural world.”
In 2017, Intel introduced Loihi as “its first neuromorphic research chip.”
A year later, Intel was building out a research community to further the development of neuromorphic algorithms, software and applications.
Wait, what’s wrong with trained neural networks? Since when are they not doing their job?
As IEEE Spectrum Senior Editor Samuel Moore explained, today’s neural networks suffer from “catastrophic forgetting.”
“If you tried to teach a trained neural network to recognize something new – a new road sign, say – by simply exposing the network to the new input, it would disrupt the network so badly that it would become terrible at recognizing anything.”
Moore added that “Traditional neural networks don’t really understand the features they’re extracting from an image in the way our brains do.”
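The forgetting effect Moore describes can be demonstrated with even the simplest trainable model. In this toy sketch (my own illustration, not from the article), a single linear neuron is trained on task A, then trained only on task B; its performance on task A degrades because nothing anchors the old weights:

```python
import numpy as np

def train(w, x, target, lr=0.1, steps=200):
    """Plain gradient descent on squared error for one linear neuron."""
    for _ in range(steps):
        w = w - lr * (w @ x - target) * x
    return w

x_a = np.array([1.0, 0.0])   # task A input, desired output 1.0
x_b = np.array([1.0, 1.0])   # task B input, desired output 0.0

w = train(np.zeros(2), x_a, 1.0)
print(round(w @ x_a, 2))     # → 1.0 (task A learned)

w = train(w, x_b, 0.0)       # continue training on task B only
print(round(w @ x_a, 2))     # → 0.5 (task A response has drifted)
```

Sequentially optimizing for the new objective overwrites the weights that encoded the old one; scaled up to deep networks and real data, this is exactly the disruption Moore's road-sign example describes.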