TopSpark: A Timestep Optimization Methodology for Energy-Efficient Spiking Neural Networks on Autonomous Mobile Agents

Autonomous mobile agents require low-power, energy-efficient machine learning (ML) algorithms to complete their ML-based tasks while adapting to diverse environments, as mobile agents are usually powered by batteries. These requirements can be fulfilled by Spiking Neural Networks (SNNs), which offer low-power/energy processing due to their sparse computations, and efficient online learning through bio-inspired learning mechanisms for adapting to different environments. Recent works have shown that the energy consumption of SNNs can be optimized by reducing the computation time of each neuron for processing a sequence of spikes (i.e., the timestep). However, state-of-the-art techniques rely on intensive design searches to determine fixed timestep settings for inference only, which prevents SNNs from achieving further energy-efficiency gains in both training and inference and restricts them from performing efficient online learning at run time. To address these limitations, we propose TopSpark, a novel methodology that leverages adaptive timestep reduction to enable energy-efficient SNN processing in both training and inference, while keeping accuracy close to that of SNNs without timestep reduction. The key ideas of TopSpark are: analyzing the impact of different timesteps on accuracy; identifying the neuron parameters that significantly affect accuracy at different timesteps; employing parameter enhancements that let SNNs learn and infer effectively with less spiking activity (see the sketch below); and developing a strategy to trade off accuracy, latency, and energy to meet design requirements. The results show that TopSpark reduces SNN latency by 3.9x on average, and energy consumption by 3.5x (training) and 3.3x (inference) on average, across different network sizes, learning rules, and workloads, while maintaining accuracy within 2% of SNNs without timestep reduction.
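To make the timestep notion concrete, the following minimal Python sketch (using numpy; not taken from the paper) simulates a leaky integrate-and-fire (LIF) layer for T timesteps. Per-sample compute scales roughly linearly with T, so reducing T cuts latency and energy; the threshold_scale argument is a hypothetical stand-in for the kind of neuron-parameter enhancement TopSpark applies so that neurons still fire sufficiently despite the shorter integration window.

import numpy as np

def lif_forward(spike_in, weights, v_decay=0.95, v_threshold=1.0,
                threshold_scale=1.0):
    # spike_in: (T, n_in) binary input spike trains; weights: (n_in, n_out).
    # threshold_scale < 1 lowers the firing threshold to compensate for the
    # smaller amount of input integrated when T is reduced (illustrative only;
    # TopSpark derives its parameter enhancements from its timestep analysis).
    timesteps = spike_in.shape[0]
    n_out = weights.shape[1]
    v = np.zeros(n_out)                          # membrane potentials
    threshold = v_threshold * threshold_scale
    spike_counts = np.zeros(n_out)
    for t in range(timesteps):                   # cost grows linearly with T
        v = v_decay * v + spike_in[t] @ weights  # leak + integrate
        fired = v >= threshold                   # fire
        spike_counts += fired
        v[fired] = 0.0                           # reset fired neurons
    return spike_counts

For instance, running with spike_in of shape (32, 784) instead of (128, 784) performs 4x fewer membrane updates; with threshold_scale around 0.25, output firing rates stay in a roughly comparable range, though this simple heuristic is only a sketch of the compensation idea.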
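The trade-off step can be sketched in the same spirit: assuming a design-time profile mapping candidate timesteps to observed accuracy (the names below are hypothetical, not TopSpark's API), one picks the smallest timestep that keeps accuracy within a user-given tolerance of the full-timestep baseline.

def select_timestep(profile, tolerance=0.02):
    # profile: dict mapping timestep -> accuracy from a profiling sweep.
    # Returns the smallest timestep whose accuracy drop, relative to the
    # largest profiled timestep, stays within `tolerance`; smaller timesteps
    # mean lower latency and energy in both training and inference.
    baseline_acc = profile[max(profile)]
    feasible = [t for t, acc in profile.items()
                if baseline_acc - acc <= tolerance]
    return min(feasible)

For example, select_timestep({100: 0.92, 50: 0.915, 25: 0.905, 10: 0.86}) returns 25, since its accuracy drop (0.015) is within the 2% tolerance while 10 falls outside it.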
