Low-Consumption Neuromorphic Memristor Architecture Based on Convolutional Neural Networks

With the rapid development of the VLSI industry, research on intelligent applications is moving toward IoT edge computing. However, the power consumption and area cost of deep neural networks usually exceed the hardware limitations of edge devices. In this paper, we propose a low-power neural network architecture to address this problem. We simplify the structure of popular convolutional neural networks, use a memristor crossbar to store weights and execute convolution operations in parallel, and present a spiking convolutional neural network. We also propose a performance metric V to provide design guidelines for choosing the parameters of the network.
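The key idea behind the crossbar approach is that a memristor array computes a matrix-vector product in one analog step: each device's conductance G[i, j] multiplies its input voltage v[j] (Ohm's law), and the resulting currents sum along each row (Kirchhoff's current law). Convolution maps onto this by storing each flattened kernel as one crossbar row and presenting image patches as voltage vectors. The following sketch simulates this idealized scheme in NumPy; the function names, shapes, and stride-1 layout are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def crossbar_mvm(conductances, voltages):
    """Ideal crossbar: output currents = G @ v, computed in one parallel step."""
    return conductances @ voltages

def conv2d_via_crossbar(image, kernels):
    """Convolve `image` (H x W) with `kernels` (K x kh x kw), stride 1.

    Each kernel is flattened into one crossbar row, so all K dot
    products for a given patch happen in a single (simulated) read.
    """
    K, kh, kw = kernels.shape
    H, W = image.shape
    G = kernels.reshape(K, kh * kw)          # map kernels onto crossbar rows
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            patch = image[i:i + kh, j:j + kw].reshape(-1)  # input voltages
            out[:, i, j] = crossbar_mvm(G, patch)          # parallel dot products
    return out

image = np.arange(16.0).reshape(4, 4)
kernels = np.stack([np.ones((3, 3)), np.eye(3)])
result = conv2d_via_crossbar(image, kernels)
print(result.shape)  # (2, 2, 2)
```

In hardware, the inner loop over patches is the only sequential part; the K dot products per patch are computed simultaneously by the array, which is the source of the parallelism and energy savings the abstract refers to.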
