A memristor-based convolutional neural network with full parallelization architecture

This paper proposes a Full-Parallel Convolutional Neural Network (FP-CNN) for specific target recognition. The network uses analog memristive crossbar circuits to perform vector-matrix multiplication and generates multiple output feature maps in a single processing cycle. Instead of the ReLU or Tanh function, we innovatively adopt the absolute-value activation function, which dramatically reduces the network scale: the network achieves a 99% recognition accuracy with only three layers. Furthermore, we propose a performance-metric function that resizes the FP-CNN for different classification tasks. Following these design guidelines, the FP-CNN still achieves over 96% recognition accuracy under a 95% memristor crossbar-array yield and 0.5% noise in the Single-Pole Double-Throw (SPDT) switches.
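As a rough illustration of the idea described above, the sketch below (not the authors' implementation) shows how a memristive crossbar can evaluate a vector-matrix multiplication in one read cycle and how an absolute-value activation could be applied to the resulting column currents. The conductance range `G_MIN`/`G_MAX`, the differential-pair weight mapping, and the kernel sizes are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of crossbar-based VMM with an absolute-value activation.
# All device parameters below are assumptions for illustration only.
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4  # assumed memristor conductance range (siemens)

def weights_to_conductances(W):
    """Map signed weights onto a differential pair of crossbars.

    Positive weights are programmed into the 'plus' array and negative
    weights into the 'minus' array; subtracting the column currents of
    the two arrays recovers the sign of the weighted sum.
    """
    scale = (G_MAX - G_MIN) / np.abs(W).max()
    G_pos = G_MIN + scale * np.clip(W, 0, None)
    G_neg = G_MIN + scale * np.clip(-W, 0, None)
    return G_pos, G_neg, scale

def crossbar_vmm(v_in, G_pos, G_neg, scale):
    """One read cycle: Ohm's law per cell, Kirchhoff's current law per column."""
    i_out = v_in @ G_pos - v_in @ G_neg   # differential column currents
    return i_out / scale                  # rescale back to weight units

def abs_activation(x):
    """Absolute-value activation used in place of ReLU/Tanh."""
    return np.abs(x)

# Example: four 3x3 convolution kernels unrolled into crossbar columns,
# so one input patch yields all four feature-map responses in a single cycle.
rng = np.random.default_rng(0)
kernels = rng.normal(size=(9, 4))   # 4 output feature maps (assumed size)
patch = rng.normal(size=9)          # flattened 3x3 input patch

G_pos, G_neg, scale = weights_to_conductances(kernels)
responses = abs_activation(crossbar_vmm(patch, G_pos, G_neg, scale))
print(responses)                    # one activation per output feature map
```

Because every kernel occupies its own column, widening the crossbar adds output feature maps without adding processing cycles, which is the parallelism the abstract refers to.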
