High-Speed CMOS-Free Purely Spintronic Asynchronous Recurrent Neural Network

Neuromorphic computing systems overcome the limitations of traditional von Neumann architectures. These systems can be further improved by using emerging technologies that are more efficient than CMOS for neural computation. Recent research has demonstrated that memristors and spintronic devices can boost the efficiency and speed of a variety of neural network designs. This paper presents a biologically inspired, fully spintronic neuron and applies it in a fully spintronic Hopfield recurrent neural network (RNN). The network is used to solve benchmark tasks, and the results are compared against those of existing Hopfield neuromorphic architectures based on emerging technologies.
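For context, the classical Hopfield model underlying such networks stores bipolar patterns via Hebbian outer-product learning and recalls them through asynchronous neuron updates (one neuron at a time, as in the asynchronous network of the title). The sketch below is a minimal software illustration of that standard algorithm, not of the paper's spintronic implementation; all function names are illustrative.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product learning over bipolar (+1/-1) patterns.

    Self-connections are zeroed, as in the standard Hopfield model.
    """
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def recall(W, state, sweeps=5):
    """Asynchronous recall: update neurons one at a time, in sequence.

    Each update uses the *latest* state of the other neurons, so the
    network energy is non-increasing and the state settles in an attractor.
    """
    s = state.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store one bipolar pattern, then recover it from a corrupted copy.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1  # flip one bit
restored = recall(W, noisy)
```

With a single stored pattern, one asynchronous sweep already flips the corrupted bit back, since each neuron's local field aligns with the stored pattern.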
