Researchers have created a 3-D computing circuit that could be used to map and implement complex machine learning algorithms


Researchers at the University of Massachusetts and the Air Force Research Laboratory Information Directorate have recently created a 3-D computing circuit that could be used to map and implement complex machine learning algorithms, such as convolutional neural networks (CNNs).

This 3-D circuit, presented in a paper published in Nature Electronics, comprises eight layers of memristors: electrical components that regulate the current flowing in a circuit and can directly implement neural network weights in hardware.
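Concretely, a memristor array evaluates a neural layer's weighted sums in one analog step: each device's conductance stores a weight, input voltages are applied to the rows, and Ohm's and Kirchhoff's laws sum the resulting currents on each column. A minimal NumPy sketch of this principle (the sizes and values below are illustrative, not the authors' hardware):

```python
import numpy as np

# Each memristor's conductance G[i, j] encodes a neural network weight.
# Applying input voltages V to the rows yields column currents
# I = G^T V (Ohm's law per device, Kirchhoff's law per column),
# i.e. a full matrix-vector product in a single analog read.
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.6, 0.1]])     # conductances, 3 rows x 2 columns
V = np.array([0.3, 0.1, 0.2])  # input voltages on the 3 rows

I = G.T @ V  # column currents: the weighted sums a neural layer needs
```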

“Previously, we developed a very reliable memristive device that meets most requirements of in-memory computing for artificial neural networks, integrated the devices into large 2-D arrays and demonstrated a wide variety of machine intelligence applications,” Prof. Qiangfei Xia, one of the researchers who carried out the study, told TechXplore.

“In our recent study, we decided to extend it to the third dimension, exploring the benefit of a rich connectivity in a 3-D neural network.”

Essentially, Prof. Xia and his team were able to experimentally demonstrate a 3-D computing circuit with eight memristor layers, which can all be engaged in computing processes.

Their circuit differs greatly from previously developed 3-D circuits, such as 3-D NAND flash, as those systems usually consist of layers with different functions (e.g., a sensor layer, a computing layer, a control layer) stacked or bonded together.

“One of the main challenges previously encountered when trying to build a multi-layer computing circuit is that there haven’t been any devices other than memristors that are stackable and yet maintain all the performance required for computing,” said Peng Lin, one of the researchers who carried out the study.

“For example, silicon-based CMOS technology is the basic building block of mainstream computing chips, but it relies on a non-stackable, high-quality single-crystal silicon layer and is thus hard to use for 3-D circuits.”

While memristors are excellent stacking devices, researchers have so far been unable to realize 3-D circuits with several stacked memristor layers for large-scale computing applications.

A 3-D memristor-based circuit for brain-inspired computing
An image of the 3-D circuit created by the researchers, captured using a scanning electron microscope (SEM). Credit: Lin et al.

In fact, building such a circuit requires highly sophisticated processes, as well as the use of techniques that can overcome challenges typically encountered when performing large-scale array operations.

“One of the prominent issues with creating a memristor array is cell-to-cell interference, the so-called ‘sneak path problem,’ which originates from the passive connections between each resistor-like memristor element,” Lin said.

“A 2-D memristor array can mitigate this problem by incorporating a transistor as a selecting device, but this solution cannot be applied in 3-D. As a result, the existing 3-D memristor designs, which are based on fully connected topologies, would suffer from the increasing leakages when scaled up to a large 3-D network.”

To overcome the challenges previously faced when trying to develop memristor-based 3-D circuits for large-scale computing, the researchers designed a circuit with a unique topology (i.e., arrangement of individual parts).

In their circuit, memristors are linked through ‘local connections’: each individual memristor shares electrodes with only a small number of devices in its vicinity. This design strategy suppresses most sneak paths, ultimately enabling large-scale array operations.
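The effect of local connectivity on the number of possible sneak paths can be illustrated with a toy connection matrix. The one-dimensional neighbourhood below is a deliberate simplification for counting purposes, not the paper's actual 3-D layout:

```python
import numpy as np

# Illustrative sketch (not the authors' exact geometry): in a locally
# connected array, each unit shares electrodes only with neighbours
# within a small radius, so most entries of the connection matrix are
# zero and the corresponding sneak paths simply do not exist.
n, radius = 8, 1
local = np.zeros((n, n), dtype=bool)
for i in range(n):
    for j in range(n):
        local[i, j] = abs(i - j) <= radius  # connect only nearby units

full_links = n * n              # links in a fully connected crossbar
local_links = int(local.sum())  # links in the locally connected design
```

Even at this toy scale the locally connected design keeps only 22 of 64 possible links, and the gap widens rapidly as the array grows.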

The unique way in which memristors are arranged and connected in the circuit devised by Prof. Xia, Lin and their colleagues also makes it ideal for implementing advanced computational techniques, such as artificial neural networks (ANNs).

Past studies suggest that memristor-based systems can host ANNs directly, with data flowing through their circuits to implement the networks’ forward/backward propagation. However, almost all existing memristor arrays are arranged in regular-shaped crossbar structures, which do not reflect the structure of ANNs.

Credit: Lin et al.

“The fully connected topology of almost all existing memristor devices does not match the complex topologies of modern neural networks, such as CNNs, the most prominent computational techniques currently used for computer vision applications,” Lin explained. “As a result, efficient implementation of a convolutional neural network in a memristor system becomes extremely challenging.”

In contrast with previously developed circuits, the locally connected topology of the 3-D circuit designed by Prof. Xia, Lin, and their colleagues naturally matches that of CNNs, as the latter include local connections between neurons known as ‘local receptive fields.’ This makes the circuit ideal for directly implementing complex neural networks.

“We are very proud of the successful demonstration of our 3-D memristor array with a record-high eight memristor layers,” Lin said. “Although a 3-D memristor circuit of such scale and with a similar number of stacked layers had previously been envisioned, there was no clear evidence that such a 3-D circuit could really be built and be fully operational. Our work firmly demonstrates the capabilities of a memristor-based system.”

So far, Prof. Xia, Lin and their colleagues have evaluated the effectiveness of their 3-D circuit by programming kernels that operate in parallel into the memristor array and then using the circuit to implement a CNN for pattern recognition. When running on the 3-D circuit, the CNN recognized handwritten digits with a remarkable accuracy of 98%.
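Conceptually, mapping convolution kernels onto such an array means flattening each kernel into a column of conductances and each image patch into a voltage vector, so that one analog read evaluates every kernel on that patch at once. A software sketch of the mapping (the kernel count and sizes are illustrative, not the paper's configuration):

```python
import numpy as np

# Hypothetical mapping of convolution kernels onto a crossbar: each
# 3x3 kernel is flattened into one column of the conductance matrix,
# each image patch into a voltage vector. A single matrix-vector
# product then evaluates all kernels on that patch in parallel.
rng = np.random.default_rng(0)
kernels = rng.standard_normal((4, 3, 3))  # 4 parallel kernels
G = kernels.reshape(4, 9).T               # 9 rows x 4 columns

image = rng.standard_normal((6, 6))
patch = image[1:4, 2:5].reshape(9)        # one 3x3 receptive field

out = patch @ G                           # 4 kernel responses at once
# Reference: evaluate each kernel separately on the same patch.
ref = np.array([(k * image[1:4, 2:5]).sum() for k in kernels])
```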

The researchers also successfully used the 3-D circuit to implement a technique for detecting the edges of moving objects in videos. To achieve this, they applied filters to their system’s structure, which allowed it to process different pixels simultaneously.
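One simple way such an edge-detecting filter can work is a fixed convolution kernel that responds only where intensity changes; the Laplacian kernel below is a standard textbook choice used here for illustration, and the paper's actual filters may differ:

```python
import numpy as np

# Illustrative edge detection with a fixed Laplacian kernel, the kind
# of filter that can be programmed into a memristor array (a common
# textbook kernel, not necessarily the one used in the paper).
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

frame = np.zeros((8, 8))
frame[2:6, 2:6] = 1.0  # a bright square on a dark background

# Valid-mode 2-D convolution: slide the kernel over every 3x3 window.
h, w = frame.shape
edges = np.zeros((h - 2, w - 2))
for i in range(h - 2):
    for j in range(w - 2):
        edges[i, j] = (laplacian * frame[i:i + 3, j:j + 3]).sum()

# The response is zero in flat regions and nonzero along the border.
```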

“We think that the unique topological design proposed in our work can open up great opportunities for designing neuromorphic computing hardware,” Lin said. “Conventional array designs are inefficient at hosting modern neural networks because of the mismatch between the full connectivity of electrodes and the more complex connections found in neural networks.

“This limitation may not be the most prominent issue at this initial stage of neuromorphic computing research, but it could eventually become the bottleneck to reproducing human brain-like intelligence in machines.”

Prof. Xia, Lin and their colleagues were the first to build memristor-based 3-D computing hardware with a much higher density of computing units and advanced capabilities. Their findings suggest that such a 3-D circuit could help to mitigate the ever-growing challenges of scaling up modern computing devices and of running advanced machine learning techniques on existing hardware.

The researchers hope that their study will inspire additional investigations into the benefits of 3-D memristor-based circuits, ultimately prompting a shift towards design strategies that prioritize both the connectivity and functionality of computing arrays.

“We now plan to integrate the 3-D neural networks with sensor arrays so that the input fed to the neural network can be 2-D matrices, rather than the 1-D vectors used by most neural networks today,” Prof. Xia said. “We will also process analog information directly in the integrated circuits. Research in these directions will greatly increase the throughput and power efficiency of the circuit’s information processing.”

Artificial neural networks have been exploited to solve many problems in the area of pattern recognition, exhibiting the potential to provide high-speed computation. One possible device for achieving high-speed computation is the memristor, the discovery of which greatly broadened the area of hybrid CMOS architectures to non-conventional logic [1], such as threshold logic [2] and neuromorphic computing [3].

Memristors were theoretically postulated by Chua in 1971 [4], and Williams’s team at HP Labs later presented a resistance-variable device as a memristor in 2008 [5]. As a novel nanoscale device, memristors provide several advantageous features, such as non-volatility, high density, low power, and good scalability [6].

Memristors are particularly appealing for realizing synaptic weights in artificial neural networks [7], [8], as the innate property of a reconfigurable resistance with memory makes them highly suitable for synapse-weight refinement.

Neuron circuits were originally developed in CMOS [9], [10]. Later, hybrid CMOS-memristor synaptic circuits were developed [11]–[13]; the area and power consumption of transistors are, however, much greater than those of memristors. A memristor bridge synapse-based neural network and its learning are proposed in [14]–[16], which implement multilayer neural networks (MNN) trained by a back propagation (BP) algorithm, with the synapse weight updates performed by a host computer.

The major computational bottleneck, however, is the learning process itself, which could not be completely implemented in hardware with massive memristor-based crossbar arrays.

Many previous memristor-based learning rules have focused on spike-timing-dependent plasticity (STDP) [17]. For example, a neuromorphic character recognition system with two PCMO memristors (2M) as a synapse was presented in [18], and a learning rule for visual pattern recognition with a CMOS neuron was proposed in [19].

The filamentary-switching binary 2M synapse was used for speech recognition [20]. The convergence of STDP-based learning, however, is not guaranteed for general inputs [21].

New methods have since been proposed for memristor-based neuromorphic architectures. For example, brain-state-in-a-box (BSB) neural networks are presented in [22], which also use 2M crossbar arrays to represent, respectively, plus-polarity and minus-polarity connection matrices.

Memristor-based multilayer neural networks with online gradient descent training are proposed in [23] and [24], which use a single memristor and two CMOS transistors (2T1M) per synapse.

A training method of a 2M hardware neuromorphic network is proposed in [25]. To reduce the circuit size, fewer memristors and transistors are desired. A memristor-based crossbar array architecture is therefore presented in [26], where both the plus-polarity and minus-polarity connection matrices are realized by a single crossbar array and a simple constant-term circuit, thereby reducing the physical size and power dissipation.
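The 2M scheme exists because device conductances are non-negative: a signed weight is stored as the difference of a plus-polarity and a minus-polarity conductance, and the two column currents are subtracted. A small sketch of this encoding (the bias conductance value is an illustrative assumption):

```python
import numpy as np

# Two-memristor (2M) signed-weight encoding: conductances cannot be
# negative, so a signed weight w is stored as w = g_plus - g_minus
# and the two column currents are subtracted at the output.
w = np.array([0.4, -0.7, 0.2])           # desired signed weights
g_base = 1.0                             # illustrative bias conductance
g_plus = g_base + np.clip(w, 0, None)    # carries the positive parts
g_minus = g_base + np.clip(-w, 0, None)  # carries the negative parts

x = np.array([0.5, 0.3, 0.1])            # input voltages
y = x @ g_plus - x @ g_minus             # equals the signed dot product x . w
```

The bias conductance cancels in the subtraction, which is what lets two non-negative arrays represent one signed matrix; the single-array design in [26] replaces the second array with a constant-term circuit playing the same cancelling role.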

The memristor-based neural network in [26] is, however, limited to a single-layer neural network (SNN). On-chip learning methods remain a challenge in most memristor-based neural networks. Neuromorphic processors with memristor-based synapses are investigated in [27]–[29] to achieve digital pattern recognition.

Training algorithms for 2M crossbar neuromorphic processors are proposed in [30] and [31], which could be used in MNNs; however, two memristors per synapse are required. An on-chip supervised learning rule for an ultra-high-density neural crossbar using a memristor for the synapse and neuron is described in [32] to perform XOR and AND logic operations. Realizing the BP algorithm on a 1M crossbar array remains an issue.

The primary contributions of this paper are:

  1. A memristor-based AND (MRL) gate [33] is utilized as a memristor-based switch (MS) [2] in updating the synaptic crossbar circuits. A memristive model for synaptic circuits based on experimental data is utilized in the simulations. Formulae for determining the relevant time for the weight-updating process are also provided. Moreover, an amplifier is added to generate the errors, creating an opportunity for updating the synaptic weights on-chip.
  2. The memristor-based SNN in [26] is expanded to an MNN and provides enhanced robustness despite memristance variations. The proposed memristor-based synaptic crossbar circuit uses fewer memristors and no transistors as compared with the synaptic circuits described in [11], [12], [14]–[16], [18]–[20], [22], [23], [25], [30], and [31].
  3. An adaptive BP algorithm suitable for the proposed memristor-based MNN is developed to train neural networks and perform the XOR function and character recognition. Moreover, the weight-adjustment process and the proposed MNNs exhibit higher recognition rates and require fewer cycles.
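For reference, the vanilla BP loop that an adaptive variant builds on can be sketched in software. The network size, random seed, and learning rate below are illustrative choices, and the on-chip memristor weight-update mechanics are not modeled:

```python
import numpy as np

# Reference software sketch of plain backpropagation learning XOR on
# a small 2-4-1 multilayer network. An adaptive BP rule and on-chip
# memristor weight updates are more involved; this loop only shows
# the forward/backward passes that get mapped to hardware.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.standard_normal((2, 4)); b1 = np.zeros(4)
W2 = rng.standard_normal((4, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

loss_before = ((sig(sig(X @ W1 + b1) @ W2 + b2) - T) ** 2).mean()

for _ in range(5000):
    h = sig(X @ W1 + b1)            # forward pass, hidden layer
    y = sig(h @ W2 + b2)            # forward pass, output layer
    d2 = (y - T) * y * (1 - y)      # output-layer error signal
    d1 = (d2 @ W2.T) * h * (1 - h)  # error backpropagated to hidden layer
    W2 -= 0.5 * h.T @ d2; b2 -= 0.5 * d2.sum(0)
    W1 -= 0.5 * X.T @ d1; b1 -= 0.5 * d1.sum(0)

loss_after = ((y - T) ** 2).mean()
```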


A single memristor-based synaptic architecture for multilayer neural networks with on-chip learning is proposed. Moreover, an adaptive BP algorithm suitable for the proposed memristor-based multilayer neural network is applied to train neural networks and perform the XOR function and character recognition.

A simple, compact, and reliable neural network can be used for applications in pattern recognition by combining the advantages of the memristor-based synaptic architecture with the proposed BP weight-change algorithm.

The advantages of the proposed architecture are verified through simulations, demonstrating that the proposed adaptive BP algorithm exhibits higher recognition rates and requires fewer cycles.

[1] Y. Zhang, Y. Shen, X. Wang, and Y. Guo, “A novel design for memristor-based OR gate,” IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 62, no. 8, pp. 781–785, Aug. 2015.
[2] Y. Zhang, Y. Shen, X. Wang, and L. Cao, “A novel design for memristor-based logic switch and crossbar circuits,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 62, no. 5, pp. 1402–1411, May 2015.

[3] G. Indiveri, B. Linares-Barranco, R. Legenstein, G. Deligeorgis, and T. Prodromakis, “Integration of nanoscale memristor synapses in neuromorphic computing architectures,” Nanotechnol., vol. 24, no. 38, p. 384010, Sep. 2013.
[4] L. Chua, “Memristor-The missing circuit element,” IEEE Trans. Circuit Theory, vol. 18, no. 5, pp. 507–519, Sep. 1971.
[5] D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, “The missing memristor found,” Nature, vol. 453, pp. 80–83, May 2008.
[6] P. Junsangsri and F. Lombardi, “Design of a hybrid memory cell using memristance and ambipolarity,” IEEE Trans. Nanotechnol., vol. 12, no. 1, pp. 71–80, Jan. 2013.
[7] Y. Zhang, Y. Li, X. Wang, and E. G. Friedman, “Synaptic characteristics of Ag/AgInSbTe/Ta-based memristor for pattern recognition applica- tions,” IEEE Trans. Electron Devices, vol. 64, no. 4, pp. 1806–1811, Apr. 2017.
[8] L. Wang, Y. Shen, Q. Yin, and G. Zhang, “Adaptive synchronization of memristor-based neural networks with time-varying delays,” IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 9, pp. 2033–2042, Sep. 2015.
[9] M. Walker, P. Hasler, and L. A. Akers, “A CMOS neural network for pattern association,” IEEE Micro, vol. 9, no. 5, pp. 68–74, Oct. 1989.
[10] B. V. Benjamin et al., “Neurogrid: A mixed-analog-digital multichip system for large-scale neural simulations,” Proc. IEEE, vol. 102, no. 5, pp. 699–716, May 2014.
[11] K. D. Cantley, A. Subramaniam, H. J. Stiegler, R. A. Chapman, and E. M. Vogel, “Neural learning circuits utilizing nano-crystalline silicon transistors and memristors,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 4, pp. 565–573, Jun. 2012.
[12] H. Manem, J. Rajendran, and G. S. Rose, “Stochastic gradient descent inspired training technique for a CMOS/nano memristive trainable threshold gate array,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 59, no. 5, pp. 1051–1060, May 2012.
[13] Z. Wang, W. Zhao, W. Kang, Y. Zhang, J.-O. Klein, and C. Chappert, “Ferroelectric tunnel memristor-based neuromorphic network with 1T1R crossbar architecture,” in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Jul. 2014, pp. 29–34.
[14] S. P. Adhikari, H. Kim, R. K. Budhathoki, C. Yang, and L. O. Chua, “A circuit-based learning architecture for multilayer neural networks with memristor bridge synapses,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 62, no. 1, pp. 215–223, Jan. 2015.
[15] S. P. Adhikari, C. Yang, H. Kim, and L. O. Chua, “Memristor bridge synapse-based neural network and its learning,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 9, pp. 1426–1435, Sep. 2012.
[16] H. Kim, M. P. Sah, C. Yang, T. Roska, and L. O. Chua, “Neural synaptic weighting with a pulse-based memristor circuit,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 59, no. 1, pp. 148–158, Jan. 2012.
[17] C. Zamarreño-Ramos, L. A. Camuñas-Mesa, J. A. Pérez-Carrasco, T. Masquelier, T. Serrano-Gotarredona, and B. Linares-Barranco, “On spike-timing-dependent-plasticity, memristive devices, and building a self-learning visual cortex,” Frontiers Neurosci., vol. 5, p. 26, Mar. 2011.
[18] A. M. Sheri, H. Hwang, M. Jeon, and B.-G. Lee, “Neuromorphic character recognition system with two PCMO memristors as a synapse,” IEEE Trans. Ind. Electron., vol. 61, no. 6, pp. 2933–2941, Jun. 2014.
[19] M. Chu et al., “Neuromorphic hardware system for visual pattern recognition with memristor array and CMOS neuron,” IEEE Trans. Ind. Electron., vol. 62, no. 4, pp. 2410–2419, Apr. 2015.
[20] S. N. Truong, S.-J. Ham, and K.-S. Min, “Neuromorphic crossbar circuit with nanoscale filamentary-switching binary memristors for speech recognition,” Nanoscale Res. Lett., vol. 9, no. 1, pp. 1–9, Nov. 2014.
[21] R. Legenstein, C. Naeger, and W. Maass, “What can a neuron learn with spike-timing-dependent plasticity?” Neural Comput., vol. 17, no. 11, pp. 2337–2382, 2005.
[22] M. Hu, H. Li, Y. Chen, Q. Wu, G. S. Rose, and R. W. Linderman, “Memristor crossbar-based neuromorphic computing system: A case study,” IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 10, pp. 1864–1878, Oct. 2014.
[23] D. Soudry, D. Di Castro, A. Gal, A. Kolodny, and S. Kvatinsky, “Memristor-based multilayer neural networks with online gradient descent training,” IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 10, pp. 2408–2421, Oct. 2015.
[24] E. Rosenthal, S. Greshnikov, D. Soudry, and S. Kvatinsky, “A fully analog memristor-based neural network with online gradient training,” in Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), 2016, pp. 1394–1397.

[25] M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev, and D. B. Strukov, “Training and operation of an integrated neuromorphic network based on metal-oxide memristors,” Nature, vol. 521, pp. 61–64, May 2015.
[26] S. N. Truong and K.-S. Min, “New memristor-based crossbar array architecture with 50% area reduction and 48% power saving for matrix-vector multiplication of analog neuromorphic computing,” J. Semicond. Tech. Sci., vol. 14, no. 3, pp. 356–363, Jun. 2014.
[27] Q. Wang, Y. Kim, and P. Li, “Architectural design exploration for neuromorphic processors with memristive synapses,” in Proc. 14th Int. Conf. Nanotechnol. (IEEE-NANO), Aug. 2014, pp. 962–966.
[28] D. Querlioz, O. Bichler, A. F. Vincent, and C. Gamrat, “Bioinspired programming of memory devices for implementing an inference engine,” Proc. IEEE, vol. 103, no. 8, pp. 1398–1416, Aug. 2015.
[29] D. Zhang et al., “All spin artificial neural networks based on compound spintronic synapse and neuron,” IEEE Trans. Biomed. Circuits Syst., vol. 10, no. 4, pp. 828–836, Aug. 2016.
[30] R. Hasan and T. M. Taha, “Enabling back propagation training of memristor crossbar neuromorphic processors,” in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Beijing, China, Jul. 2014, pp. 21–28.
[31] I. Kataeva, F. Merrikh-Bayat, E. Zamanidoost, and D. Strukov, “Efficient training algorithms for neural networks based on memristive crossbar circuits,” in Proc. Int. Joint Conf. Neural Netw. (IJCNN), 2015, pp. 1–8.
[32] D. Chabi, Z. Wang, W. Zhao, and J.-O. Klein, “On-chip supervised learning rule for ultra high density neural crossbar using memristor for synapse and neuron,” in Proc. Int. Symp. Nanoscale Archit. (NANOARCH), 2014, pp. 7–12.
[33] S. Kvatinsky, N. Wald, G. Satat, A. Kolodny, U. C. Weiser, and E. G. Friedman, “MRL–memristor ratioed logic,” in Proc. 13th Int. Workshop Cellular Nanoscale Netw. Appl., Aug. 2012, pp. 1–6.

More information: Peng Lin et al. Three-dimensional memristor circuits as complex neural networks, Nature Electronics (2020). DOI: 10.1038/s41928-020-0397-9

