Input Voltage Mapping Optimized for Resistive Memory-Based Deep Neural Network Hardware

Artificial neural network (ANN) computations on graphics processing units (GPUs) consume considerable power. Resistive random-access memory (RRAM) has been gaining attention as a promising technology for implementing power-efficient ANNs in place of GPUs. However, the nonlinear $I$–$V$ characteristics of RRAM devices have limited their use in ANN implementations. In this letter, we propose a method and a circuit that address the issues caused by these nonlinear $I$–$V$ characteristics. We demonstrate the feasibility of the method by simulating its application to multiple neural networks, from a multi-layer perceptron to a deep convolutional neural network, based on a typical RRAM model. Classification results on datasets including ImageNet show that the proposed method achieves much higher accuracy than naive linear input mapping over a wide range of nonlinearity.
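To make the underlying idea concrete, the sketch below contrasts a naive linear voltage mapping with a nonlinearity-aware mapping on a hypothetical RRAM conduction model. The sinh-type $I$–$V$ model, the parameter values, and the inverse-transform mapping are all illustrative assumptions for this sketch, not the paper's exact method or circuit.

```python
import numpy as np

# Hypothetical RRAM conduction model (a common choice in the literature):
#   I(V) = I0 * sinh(V / V0)
# I0 stands in for the programmed conductance state; V0 controls how
# strongly the device deviates from Ohm's law (smaller V0 = more nonlinear).
I0, V0, V_READ = 1e-6, 0.25, 0.5  # illustrative values, not from the paper

def rram_current(v):
    return I0 * np.sinh(v / V0)

def naive_mapping(x):
    # Naive linear mapping: the normalized input x in [0, 1] scales the
    # read voltage directly, so the resulting current is distorted by sinh().
    return x * V_READ

def nonlinearity_aware_mapping(x):
    # Pre-distort the input through the inverse of the device curve so the
    # delivered current becomes proportional to x. This transform is an
    # assumption for illustration, not the paper's circuit.
    return V0 * np.arcsinh(x * np.sinh(V_READ / V0))

x = np.linspace(0.0, 1.0, 5)
print(rram_current(naive_mapping(x)) / rram_current(V_READ))               # nonlinear in x
print(rram_current(nonlinearity_aware_mapping(x)) / rram_current(V_READ))  # proportional to x
```

Under this model, pre-distorting the input exactly cancels the device curve, so the read current scales linearly with the input value, which is what a crossbar-based vector-matrix multiplication requires.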
