Learning to Predict IR Drop with Effective Training for ReRAM-based Neural Network Hardware

Because IR drop is unavoidable in passive ReRAM crossbar arrays, a software solution that can predict its effect without resorting to expensive SPICE simulations is highly desirable. In this paper, two simple neural networks are proposed as a software solution for predicting the effect of IR drop. These networks can be easily integrated into any deep neural network framework to account for IR drop during training. As an example, the proposed solution is integrated into the BinaryNet framework; test validation through SPICE simulations shows a large performance improvement, approaching the baseline, which demonstrates the efficacy of the proposed method. In addition, the proposed solution outperforms prior work on challenging datasets such as CIFAR-10 and SVHN.
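As a rough illustration of how such a predictor could be wired into training, the sketch below inserts a surrogate model between the ideal crossbar computation and the rest of the network. The function names, the single-linear-layer predictor, and all shapes here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def ideal_crossbar_output(weights, inputs):
    # The ideal vector-matrix product a ReRAM crossbar is meant to compute.
    return inputs @ weights

def ir_drop_predictor(ideal_out, params):
    # Stand-in for a small trained network that maps the ideal crossbar
    # output to its IR-drop-degraded counterpart. A single linear layer
    # is used here purely as a placeholder.
    w, b = params
    return ideal_out @ w + b

rng = np.random.default_rng(0)
weights = rng.standard_normal((8, 4))    # crossbar conductances (8 rows, 4 columns)
inputs = rng.standard_normal((2, 8))     # batch of 2 input-voltage vectors
params = (np.eye(4) * 0.9, np.zeros(4))  # toy predictor parameters (uniform attenuation)

ideal = ideal_crossbar_output(weights, inputs)
degraded = ir_drop_predictor(ideal, params)
# During training, `degraded` would replace `ideal` in the forward pass,
# so the network learns weights that are robust to IR drop.
print(degraded.shape)
```

Because the predictor is itself a differentiable network, gradients can flow through it, which is what allows the IR-drop effect to be incorporated into standard backpropagation.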
