Cortical Processing with Thermodynamic-RAM

AHaH computing provides a theoretical framework for building a biologically inspired computing architecture in which, unlike in von Neumann systems, memory and processing are physically combined. In this paper we report an incremental step beyond the theoretical framework of AHaH computing toward the development of a memristor-based physical neural processing unit (NPU), which we call Thermodynamic-RAM (kT-RAM). While the power-consumption and speed advantages of such an NPU over von Neumann architectures for machine-learning applications are well appreciated, Thermodynamic-RAM offers several further advantages over other hardware approaches to adaptation and learning, including general-purpose use, a simple yet flexible instruction set, and easy integration into existing digital platforms. We present a high-level design of kT-RAM and a formal definition of its instruction set. We report the completion of a kT-RAM emulator and the successful port of all previous machine-learning benchmark applications, including unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation, and combinatorial optimization. Lastly, we extend a previous MNIST handwritten-digits benchmark application to show that an extra step of reading the synaptic states of AHaH nodes during the training phase ("healing") by itself results in plasticity that improves the classifier's performance, raising our best F1 score to 99.5%.
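The "healing" effect mentioned in the abstract, in which extra reads of AHaH synapses during training themselves induce useful plasticity, can be illustrated with a toy node emulator. The Python sketch below is not the kT-RAM instruction set or the authors' emulator: the weight representation, the anti-Hebbian read rule, and the constants `alpha` and `beta` are illustrative assumptions only.

```python
import random


class AHaHNode:
    """Toy sketch of an AHaH node: a vector of synaptic weights that
    adapt both when the node is read (anti-Hebbian nudge) and when
    supervised feedback is applied (Hebbian nudge). The update rules
    and constants here are assumptions for demonstration, not the
    kT-RAM specification."""

    def __init__(self, num_inputs, alpha=0.001, beta=0.01):
        self.w = [random.uniform(-0.1, 0.1) for _ in range(num_inputs)]
        self.alpha = alpha  # magnitude of the read-induced (anti-Hebbian) nudge
        self.beta = beta    # magnitude of the supervised (Hebbian) nudge

    def read(self, active):
        """Evaluate the node on a set of active input indices.
        Reading itself perturbs the active weights against the sign
        of the output, which over many reads decays weights that are
        not reinforced by feedback (the 'healing' effect)."""
        y = sum(self.w[i] for i in active)
        s = 1.0 if y >= 0 else -1.0
        for i in active:
            self.w[i] -= self.alpha * s
        return y

    def feedback(self, active, target):
        """Supervised Hebbian feedback toward a target in {-1.0, +1.0}."""
        for i in active:
            self.w[i] += self.beta * target
```

In this sketch, the classifier improvement reported in the abstract corresponds to issuing additional `read()` calls during training, so that synapses not consistently reinforced by `feedback()` are gradually driven toward zero.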
