NxTF: An API and Compiler for Deep Spiking Neural Networks on Intel Loihi

Spiking Neural Networks (SNNs) are a promising paradigm for efficient, event-driven processing of spatio-temporally sparse data streams. SNNs have inspired the design of, and can take advantage of, the emerging class of neuromorphic processors such as Intel Loihi. These novel hardware architectures expose a variety of constraints that affect firmware, compiler, and algorithm development alike. To enable rapid and flexible development of SNN algorithms on Loihi, we developed NxTF: a programming interface derived from Keras and a compiler optimized for mapping deep convolutional SNNs to the multi-core Intel Loihi architecture. We evaluate NxTF on Deep Neural Networks (DNNs) trained directly on spikes as well as on models converted from traditional DNNs, processing both sparse event-based and dense frame-based data sets. Further, we assess the effectiveness of the compiler at distributing models across a large number of cores and at compressing models by exploiting Loihi’s weight-sharing features. Finally, we evaluate model accuracy, energy, and time to solution compared to other architectures. The compiler achieves near-optimal resource utilization of 80% across 16 Loihi chips for a 28-layer, 4M-parameter MobileNet model with 128 × 128 input. In addition, we report the lowest error rate of 8.52% for the CIFAR-10 dataset on neuromorphic hardware, using an off-the-shelf MobileNet.
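To illustrate the Keras-derived programming interface described above, the sketch below shows how a small convolutional SNN might be defined and compiled for Loihi. The import path, the NxInputLayer / NxConv2D / NxDense / NxModel layer names, and the compileModel / run calls are assumptions chosen to mirror the Keras functional API; the actual NxTF/NxSDK interface may differ in naming and details.

```python
# Minimal, illustrative sketch of a Keras-style SNN definition for an
# NxTF-like compiler. All names below (module path, layer classes, and the
# compileModel/run/disconnect methods) are assumptions for illustration only.

from nxtf import NxInputLayer, NxConv2D, NxFlatten, NxDense, NxModel  # hypothetical import

# Build a small convolutional SNN in the Keras functional style.
inp = NxInputLayer((32, 32, 3))                                   # e.g. CIFAR-10 input shape
x = NxConv2D(16, kernel_size=3, activation='relu')(inp.input)     # conv layer mapped to neurocores
x = NxConv2D(32, kernel_size=3, strides=2, activation='relu')(x)
x = NxFlatten()(x)
out = NxDense(10, activation='softmax')(x)                        # 10-class readout

model = NxModel(inp.input, out)

# A compiler in the spirit of NxTF would partition these layers across Loihi
# cores and reuse shared convolution kernels where the hardware's
# weight-sharing features allow.
model.compileModel()          # map the network onto neurocores (hypothetical call)
model.run(numSteps=512)       # run inference for a fixed number of timesteps
model.disconnect()            # release the hardware (hypothetical call)
```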
