Reducing Impact of CNFET Process Imperfections on Shape of Activation Function by Using Connection Pruning and Approximate Neuron Circuit

Deep Neural Networks (DNNs) built on carbon nanotube field-effect transistor (CNFET) technology can leverage the potential energy benefits of CNFETs over conventional Si technology. However, as with other emerging-material technologies, current CNFET fabrication processes lack maturity: CNFETs suffer from process imperfections, which in turn degrade circuit-level performance. These imperfections cause timing failures and distort the shape of the non-linear activation functions that are vital to a DNN, leading to significant degradation in classification accuracy. We apply pruning of synaptic weights, which, combined with the proposed approximate neuron circuit, significantly reduces the chance of timing failure and achieves a higher operating frequency (speed), even under a highly imperfect process. In our example, the proposed configuration with the approximate neuron and pruning under a highly imperfect process ($PCNT_{open} = 40\%$), compared to the base configuration with a precise neuron, no pruning, and an ideal process ($PCNT_{open} = 0\%$), achieves a peak accuracy only 0.19% lower, but a significant energy-delay-product (EDP) advantage (56.7% lower), at no area penalty.
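The abstract does not specify the pruning criterion, so the following is a minimal sketch assuming simple magnitude-threshold pruning of a layer's weight matrix; the function name prune_connections and the threshold value are illustrative, not from the paper. The comment ties the software view of pruning to the hardware claim: a pruned connection contributes no partial product to the neuron's multiply-accumulate, shortening its worst-case evaluation path.

import numpy as np

def prune_connections(weights, threshold=0.05):
    """Zero out synaptic weights whose magnitude falls below `threshold`.

    Hypothetical magnitude-threshold pruning: a pruned (zeroed) connection
    contributes no partial product to the neuron's multiply-accumulate,
    which shortens the worst-case evaluation path and thereby reduces the
    chance of a timing failure in the hardware neuron.
    """
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Example: prune a small fully connected layer's weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(128, 64))
w_pruned, mask = prune_connections(w)
print(f"pruned {100.0 * (1.0 - mask.mean()):.1f}% of connections")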

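For scale, energy-delay product is energy per operation times delay, $EDP = E \times t_{d}$, so the reported 56.7% reduction means the proposed configuration operates at $EDP_{prop} \approx (1 - 0.567)\,EDP_{base} = 0.433\,EDP_{base}$, under half the baseline EDP, while giving up only 0.19% peak accuracy.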