HFNet: A CNN Architecture Co-designed for Neuromorphic Hardware With a Crossbar Array of Synapses

The hardware-software co-optimization of neural network architectures is a field of research that emerged with the advent of commercial neuromorphic chips such as the IBM TrueNorth and Intel Loihi. The development of simulation and automated mapping software tools in tandem with the design of neuromorphic hardware, whilst taking hardware constraints into consideration, will play an increasingly significant role in the deployment of system-level applications. This paper illustrates the importance and benefits of co-designing convolutional neural networks (CNNs) that are to be mapped onto neuromorphic hardware with a crossbar array of synapses. Toward this end, we first study which convolution techniques are more hardware friendly and propose different mapping techniques for different convolutions. We show that, for a seven-layer CNN, our proposed mapping technique reduces the number of cores used by 4.9 to 13.8 times relative to the Toeplitz method of mapping, for crossbar sizes ranging from 128 × 256 to 1,024 × 1,024. We next develop an iterative co-design process for the systematic design of more hardware-friendly CNNs whilst considering hardware constraints such as core size. A Python wrapper, developed for the mapping process, is also useful for validating hardware designs and for studies of traffic volume and energy consumption. Finally, a new neural network dubbed HFNet is proposed using the above co-design process; it achieves a classification accuracy of 71.3% on the ImageNet dataset (comparable to VGG-16) while using 11 times fewer cores on neuromorphic hardware with a core size of 1,024 × 1,024. We also modify HFNet to fit onto different core sizes and report the corresponding classification accuracies. Various aspects of the paper are patent pending.
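
As a rough back-of-the-envelope illustration of why crossbar (core) size drives core count, the sketch below tiles an unrolled convolution weight matrix onto fixed-size crossbars under two naive strategies. The tiling formulas, the weight-stationary alternative, and the example layer dimensions are illustrative assumptions only; they are not the mapping techniques proposed in the paper.

```python
# Minimal, illustrative sketch (NOT the paper's mapping algorithm):
# estimate how many fixed-size crossbar cores a convolution layer occupies
# under two naive mapping strategies. Layer dimensions are hypothetical.

import math


def tiles(rows: int, cols: int, xbar_rows: int, xbar_cols: int) -> int:
    """Crossbars needed to tile a rows x cols weight matrix."""
    return math.ceil(rows / xbar_rows) * math.ceil(cols / xbar_cols)


def toeplitz_like_cores(k, c_in, c_out, out_h, out_w, xbar_rows, xbar_cols):
    """Toeplitz-like mapping: every output position gets its own copy of the
    (k*k*c_in -> c_out) weight block, so cores scale with the feature map."""
    per_position = tiles(k * k * c_in, c_out, xbar_rows, xbar_cols)
    return out_h * out_w * per_position


def weight_stationary_cores(k, c_in, c_out, xbar_rows, xbar_cols):
    """Weight-stationary mapping: the kernel matrix is stored once and reused
    across output positions by streaming input patches over time."""
    return tiles(k * k * c_in, c_out, xbar_rows, xbar_cols)


if __name__ == "__main__":
    # Hypothetical 3x3 convolution, 64 -> 128 channels, 56 x 56 output map.
    for xbar in [(128, 256), (256, 256), (512, 512), (1024, 1024)]:
        t = toeplitz_like_cores(3, 64, 128, 56, 56, *xbar)
        w = weight_stationary_cores(3, 64, 128, *xbar)
        print(f"crossbar {xbar[0]}x{xbar[1]}: "
              f"toeplitz-like ~{t} cores, weight-stationary ~{w} cores")
```

The contrast between the two estimates only motivates why the choice of convolution and mapping technique matters as crossbar size changes; the paper's actual core-count reductions come from its proposed mapping techniques and the HFNet architecture.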
