A General Framework to Map Neural Networks onto Neuromorphic Processor

Bio-inspired neuromorphic hardware is an emerging computing architecture that features highly parallel and distributed computing elements, similar in functionality to the human brain. Recent studies show that neuromorphic hardware can achieve state-of-the-art performance on various cognitive tasks. However, limitations in fabrication technology lead to constraints on fan-in, fan-out, memory capacity, and connectivity, making neuromorphic chips difficult to program. Neural networks must satisfy specific constraints before they can be mapped to hardware, which not only requires developers to have knowledge of the specific hardware but also makes training difficult. We propose a general framework to address these issues. It consists of a workflow that converts an existing neural network to satisfy the hardware constraints while minimizing the conversion error, together with algorithms that increase hardware resource utilization and reduce on-chip communication cost; both are evaluated experimentally. The results show that the framework reduces the conversion error to 0.67% and reduces communication latency by 53%.
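
To make the kind of constraint handling described above concrete, the sketch below illustrates one common way (not necessarily the authors' method) to satisfy a hardware fan-in limit: a neuron whose inputs exceed the limit is split across intermediate aggregation neurons. The function name, the fan-in limit, and the grouping strategy are assumptions for illustration only.

```python
# Hypothetical sketch: grouping a neuron's inputs so each group fits a
# hardware fan-in limit. Each group would be summed by an intermediate
# (identity) neuron; intermediate outputs are then combined by the original
# neuron. This is a generic technique, not the paper's specific algorithm.
from typing import List


def split_fan_in(input_ids: List[int], max_fan_in: int) -> List[List[int]]:
    """Partition a neuron's inputs into chunks of at most max_fan_in."""
    if len(input_ids) <= max_fan_in:
        return [input_ids]
    return [input_ids[i:i + max_fan_in]
            for i in range(0, len(input_ids), max_fan_in)]


# Example: a neuron with 10 inputs on hardware allowing a fan-in of 4
# would be realized with three intermediate neurons feeding a final one.
print(split_fan_in(list(range(10)), max_fan_in=4))
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```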
