Mapping-aware Biased Training for Accurate Memristor-based Neural Networks

Memristor-based computation-in-memory (CIM) can achieve high energy efficiency by processing data within the memory, which makes it well-suited for applications such as neural networks. However, memristors suffer from conductance variation, where the programmed conductance values deviate from the desired values. These variations lead to computational errors and, in turn, degraded inference accuracy in CIM-based neural networks. In this paper, we present a mapping-aware biased training methodology that mitigates the impact of conductance variation on CIM-based neural networks. We first determine which conductance states of the memristor are inherently more immune to variation. The neural network is then trained under the constraint that important weights may only take numeric values that map directly onto these favorable states. Simulation results show that the proposed mapping-aware biased training achieves up to 2.4× higher hardware accuracy than conventional training.
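
The abstract describes the methodology only at a high level, so the sketch below is not the authors' algorithm; it is a minimal illustration of the core idea under stated assumptions: a PyTorch training loop in which, after each optimizer step, the largest-magnitude weights (an assumed importance criterion) are projected onto a hypothetical set of values (FAVORABLE_LEVELS) that map directly to low-variation conductance states. All names and parameters here are illustrative, not the paper's API.

```python
import torch

# Assumed normalized weight values that map one-to-one onto the
# memristor conductance states found to be most immune to variation.
# (Illustrative placeholder; the paper derives the favorable states
# from device characterization.)
FAVORABLE_LEVELS = torch.tensor([-1.0, -0.5, 0.0, 0.5, 1.0])

def snap_to_favorable(w: torch.Tensor) -> torch.Tensor:
    """Replace every element of w by its nearest favorable level."""
    levels = FAVORABLE_LEVELS.to(w.device)
    dist = (w.unsqueeze(-1) - levels).abs()  # shape (..., n_levels)
    return levels[dist.argmin(dim=-1)]

@torch.no_grad()
def project_important_weights(model: torch.nn.Module, frac: float = 0.1) -> None:
    """Constrain the top-`frac` largest-magnitude weights of each layer
    to favorable levels (magnitude is an assumed importance criterion)."""
    for p in model.parameters():
        if p.dim() < 2:  # skip biases and other 1-D parameters
            continue
        k = max(1, int(frac * p.numel()))
        thresh = p.abs().flatten().topk(k).values.min()
        important = p.abs() >= thresh
        p.copy_(torch.where(important, snap_to_favorable(p), p))

def train_epoch(model, loader, loss_fn, optimizer):
    """One epoch of biased training: a normal SGD step followed by a
    projection, so important weights always sit on values that map
    directly to low-variation conductance states."""
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
        project_important_weights(model)
```

Projecting after every update, rather than quantizing once before deployment, lets the unconstrained weights adapt around the constrained ones during training, which is the usual motivation for variation-aware training schemes of this kind.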
