AND Flash Array Based on Charge Trap Flash for Implementation of Convolutional Neural Networks

Various memory devices have been proposed as synaptic devices for neuromorphic systems. In this letter, we propose an AND flash array based on charge-trap flash (CTF) memory. CTF-based synaptic devices are particularly suitable for off-chip learning applications because of their excellent reliability and stable multi-level operation. In addition, we propose a method for implementing convolutional neural networks (CNNs) in the proposed array and perform system-level simulations using the characteristics of the fabricated device. Finally, we investigate the accuracy degradation of the neuromorphic system caused by data retention loss and propose a multiple-cell mapping scheme to mitigate it.
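The intuition behind the multiple-cell mapping scheme can be illustrated with a short simulation. The sketch below is our own illustration, not the paper's code: it assumes each weight is stored as a differential pair of CTF cell conductances, an assumed conductance window (G_MIN, G_MAX), and a simple log-time drift model for retention loss. It shows that programming the same weight into several cell pairs and averaging the read-out reduces the retention-induced weight error.

```python
import numpy as np

# Minimal sketch (assumed parameters, not the paper's implementation):
# a weight is stored as a differential pair of CTF cell conductances,
# and retention loss is modeled as random drift that grows with time.
G_MIN, G_MAX = 0.1e-6, 1.0e-6   # assumed conductance window [S]
rng = np.random.default_rng(0)

def weight_to_conductance(w):
    """Map a weight in [-1, 1] to a (G+, G-) conductance pair."""
    g_pos = G_MIN + (G_MAX - G_MIN) * max(w, 0.0)
    g_neg = G_MIN + (G_MAX - G_MIN) * max(-w, 0.0)
    return g_pos, g_neg

def apply_retention(g, t, sigma=0.05):
    """Perturb conductances with drift whose spread grows ~ log(t)."""
    drift = rng.normal(0.0, sigma * np.log1p(t), size=np.shape(g))
    return np.clip(g * (1.0 + drift), G_MIN, G_MAX)

def read_weight(w, t, n_cells=1):
    """Program the same weight into n_cells pairs, age them for time t,
    then read back the averaged effective weight."""
    g_pos, g_neg = weight_to_conductance(w)
    g_pos = apply_retention(np.full(n_cells, g_pos), t)
    g_neg = apply_retention(np.full(n_cells, g_neg), t)
    return (g_pos.mean() - g_neg.mean()) / (G_MAX - G_MIN)

# Averaging over more cells per weight shrinks the retention-induced error.
for n in (1, 2, 4):
    errs = [read_weight(0.5, t=1e4, n_cells=n) - 0.5 for _ in range(2000)]
    print(f"{n} cell(s)/weight: RMS weight error = "
          f"{np.sqrt(np.mean(np.square(errs))):.4f}")
```

Under this model the RMS weight error falls roughly as 1/sqrt(n), which captures the trade-off behind the scheme: each additional cell pair per weight costs array area but makes the stored weight, and hence the inference accuracy, more robust to retention loss.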
