Investigation of Neural Networks Using Synapse Arrays Based on Gated Schottky Diodes

A synapse device array based on gated Schottky diodes (GSDs) is fabricated. The GSD operates in reverse mode, so the synapse current is very low, which helps implement low-power hardware-based neural networks (HNNs). The reverse Schottky diode current, which represents the synaptic weight, is modulated by applying program or erase pulses. Because only the reverse diode current is used as the synapse current while the forward diode current is cut off, the sneak-path problem in the crossbar array is prevented. A synapse array consisting of 200 fabricated GSDs shows a variation (σ/μ) of 0.34, 0.22, and 0.14 for three different synaptic weight states. Using this GSD array, we perform vector-by-matrix multiplication and evaluate MNIST inference accuracy. As a baseline for MNIST classification, a convolutional neural network similar to LeNet-5 is designed and achieves an accuracy of 99.53%. A normalization method is applied to the weights trained in this network to map them into the conductance range of the synapse devices, and adaptive weight quantization is then applied to the normalized weights. We verify that the HNN using GSDs performs comparably to the baseline network even in the presence of non-ideal synapse device characteristics.

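The sketch below illustrates, in simplified form, the mapping flow described above: trained weights are normalized into a device conductance window, quantized to a small number of weight states, and then used in an ideal vector-by-matrix multiplication (VMM). This is not the authors' code; the conductance window (G_MIN, G_MAX), the number of states, the uniform quantization levels, and the differential-pair representation of signed weights are all assumptions for illustration.

```python
import numpy as np

G_MIN, G_MAX = 1e-9, 100e-9   # assumed reverse-current conductance window (S)
N_STATES = 3                  # assumed number of programmable weight states per polarity

def normalize_weights(w):
    """Scale trained weights so their magnitudes fit the [-1, 1] range."""
    return w / np.max(np.abs(w))

def quantize(w_norm, levels):
    """Map each normalized weight to the nearest allowed level (uniform here;
    an adaptive scheme would place levels according to device nonlinearity)."""
    idx = np.argmin(np.abs(w_norm[..., None] - levels), axis=-1)
    return levels[idx]

def to_conductance_pair(w_q):
    """Represent signed weights with a differential pair (G+, G-),
    a common convention for two-terminal synapse arrays."""
    g_pos = np.where(w_q > 0, G_MIN + w_q * (G_MAX - G_MIN), G_MIN)
    g_neg = np.where(w_q < 0, G_MIN - w_q * (G_MAX - G_MIN), G_MIN)
    return g_pos, g_neg

def vmm(v_in, g_pos, g_neg):
    """Ideal VMM: output currents are differences of column current sums."""
    return v_in @ g_pos - v_in @ g_neg

rng = np.random.default_rng(0)
w = rng.normal(size=(784, 10))                     # e.g., a trained MNIST layer
levels = np.linspace(-1.0, 1.0, 2 * N_STATES + 1)  # symmetric quantization levels
w_q = quantize(normalize_weights(w), levels)
g_pos, g_neg = to_conductance_pair(w_q)
i_out = vmm(rng.random((1, 784)), g_pos, g_neg)
print(i_out.shape)  # (1, 10) output currents
```

In practice, non-idealities such as device-to-device variation and nonlinear conductance updates would be folded into the quantization and mapping steps rather than handled by the ideal functions shown here.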