Low-Power Deep Learning Inference using the SpiNNaker Neuromorphic Platform

With the successes deep neural networks have achieved across a range of applications, researchers have been exploring computational architectures to execute them more efficiently. In addition to the prevalent role of graphics processing units (GPUs), many accelerator architectures have emerged. Neuromorphic computing is one such approach, taking inspiration from the brain to guide the computational principles of the architecture, with varying levels of biological realism. In this paper we present results on using the SpiNNaker neuromorphic platform (48-chip model) for deep learning neural network inference. We use the Whetstone spiking deep learning library, developed at Sandia National Laboratories, to train deep multi-layer perceptrons and convolutional neural networks whose binary communication makes them suitable for the spiking substrate of the neural hardware. By exploiting the massively parallel nature of SpiNNaker, we are able, for certain network topologies, to tile many copies of a network across the board and consequently achieve high inference throughput. Such high-throughput systems may eventually find use in remote sensing, where large images need to be chipped, scanned, and processed quickly. Additionally, we explore complex topologies that push the limits of the SpiNNaker routing hardware and investigate how this affects the mapping of software-implemented networks to on-hardware instantiations.
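
A minimal sketch can make the training step concrete. The core Whetstone idea, as the abstract describes it, is to train an ordinary deep network whose activations are gradually "sharpened" into 0/1 step functions, so that layer-to-layer communication becomes binary and can be carried as spikes on a platform like SpiNNaker. The Keras sketch below illustrates that idea only; the layer and callback names (SharpenedBRelu, Sharpener), the decay schedule, and the MNIST-scale MLP are illustrative assumptions, not the Whetstone library's actual API.

```python
# Illustrative sketch of Whetstone-style activation sharpening (not the
# library's real API): a bounded ReLU whose transition width narrows each
# epoch until the activation approximates a 0/1 step function, making
# inter-layer communication binary (spike-like).
import tensorflow as tf
from tensorflow import keras

class SharpenedBRelu(keras.layers.Layer):
    """Bounded ReLU: 0 below a threshold, 1 above, linear in between.

    As `width` -> 0 this approaches a Heaviside step, so trained
    activations become the binary events a spiking substrate can carry.
    """
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.width = tf.Variable(1.0, trainable=False)  # transition width

    def call(self, x):
        # Linear ramp from 0 to 1 centered at 0.5, clipped at both ends.
        # With width == 1.0 this is a plain bounded ReLU, clip(x, 0, 1).
        return tf.clip_by_value((x - 0.5) / self.width + 0.5, 0.0, 1.0)

class Sharpener(keras.callbacks.Callback):
    """Narrows every SharpenedBRelu's transition width after each epoch."""
    def __init__(self, decay=0.7, floor=1e-3):
        super().__init__()
        self.decay, self.floor = decay, floor

    def on_epoch_end(self, epoch, logs=None):
        for layer in self.model.layers:
            if isinstance(layer, SharpenedBRelu):
                layer.width.assign(max(self.floor, float(layer.width) * self.decay))

# Hypothetical MNIST-scale MLP of the kind that could be mapped to SpiNNaker.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(256), SharpenedBRelu(),
    keras.layers.Dense(64), SharpenedBRelu(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

(x_train, y_train), _ = keras.datasets.mnist.load_data()
model.fit(x_train / 255.0, y_train, epochs=10, callbacks=[Sharpener()])
```

Once sharpening completes, each unit emits only 0 or 1, so the trained weights can in principle be transferred to spiking neurons (e.g., via PyNN/sPyNNaker), and a small network of this kind can be replicated many times across the cores of the 48-chip board, which is the tiling that yields the aggregate inference throughput discussed above.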
