A Probabilistic Synapse With Strained MTJs for Spiking Neural Networks

Spiking neural networks (SNNs) are of interest for applications in which conventional computing suffers from the nearly insurmountable memory–processor bottleneck. This paper presents a stochastic SNN architecture built from specialized logic-in-memory synaptic units, yielding a massively parallel processing system. Our proposed synaptic unit consists of strained magnetic tunnel junction (MTJ) devices and transistors. The MTJs in our synapse are dual-purpose, serving as both random-bit generators and general-purpose memory. Our neurons are modeled as integrate-and-fire components with thresholding and refraction. Our circuit is implemented in 28-nm CMOS technology that is compatible with the MTJ technology. Our design shows that the required area for the proposed synapse is only $3.64~\mu\text{m}^{2}$ per unit. When idle, the synapse consumes 675 pW; when firing, the energy required to propagate a spike is 8.87 fJ. We then demonstrate an SNN that learns without supervision and classifies handwritten digits of the MNIST database. Simulation results show that our network achieves high classification accuracy even in the presence of fabrication variability.
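To make the abstract's two key behavioral ingredients concrete, the following is a minimal behavioral sketch (not the paper's circuit) of a probabilistic synapse, where each spike is gated by a random bit such as an MTJ read would provide, feeding an integrate-and-fire neuron with a threshold and a refractory period. All class and parameter names here are illustrative assumptions, not taken from the paper.

```python
import random


class ProbabilisticSynapse:
    """Transmits a presynaptic spike with probability p.

    The random bit plays the role the strained MTJ plays in the
    hardware synapse; the stored weight plays its memory role.
    """

    def __init__(self, weight, p=0.5, seed=None):
        self.weight = weight          # synaptic weight (memory function)
        self.p = p                    # transmission probability (random-bit function)
        self.rng = random.Random(seed)

    def transmit(self, spike):
        # A spike propagates only when the random bit comes up 1.
        if spike and self.rng.random() < self.p:
            return self.weight
        return 0.0


class IntegrateAndFireNeuron:
    """Integrate-and-fire neuron with thresholding and refraction."""

    def __init__(self, threshold=1.0, refractory_steps=2):
        self.threshold = threshold
        self.refractory_steps = refractory_steps
        self.potential = 0.0
        self.refractory = 0           # remaining refractory time steps

    def step(self, input_current):
        # During refraction the neuron ignores input and cannot fire.
        if self.refractory > 0:
            self.refractory -= 1
            return False
        self.potential += input_current
        if self.potential >= self.threshold:
            # Fire: reset the membrane potential and enter refraction.
            self.potential = 0.0
            self.refractory = self.refractory_steps
            return True
        return False


# Deterministic demo: with p=1.0 every spike transmits, so two input
# spikes of weight 0.5 reach the threshold of 1.0, after which the
# neuron is silent for two refractory steps.
syn = ProbabilisticSynapse(weight=0.5, p=1.0)
neuron = IntegrateAndFireNeuron(threshold=1.0, refractory_steps=2)
fires = [neuron.step(syn.transmit(True)) for _ in range(6)]
print(fires)  # [False, True, False, False, False, True]
```

Setting `p` below 1.0 makes spike propagation stochastic, which is the abstract's point: the same unit supplies both the randomness and the stored weight.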
