FPGA-Optimized Hardware Acceleration for Spiking Neural Networks

Artificial intelligence (AI) is gaining success and importance in many different tasks. The growing pervasiveness and complexity of AI systems push researchers towards developing dedicated hardware accelerators. Spiking Neural Networks (SNNs) are a promising solution in this regard, since they implement models that are better suited to reliable hardware design and, from a neuroscience perspective, more closely emulate the human brain. This work presents the development of a hardware accelerator for an SNN with off-line training, applied to an image recognition task on the MNIST dataset. Several techniques are used to minimize area and maximize performance, such as replacing multiplications with simple bit shifts and minimizing the time spent on inactive spikes, which do not contribute to the update of the neurons' internal state. The design targets a Xilinx Artix-7 FPGA, uses in total around 40% of the available hardware resources, and reduces the classification time by three orders of magnitude compared to its full-precision software counterpart, at the cost of a small 4.5% drop in accuracy.
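As a rough illustration of the two optimizations mentioned above, the following minimal Python sketch models a leaky integrate-and-fire (LIF) layer in which synaptic weights are constrained to powers of two, so each multiply-accumulate collapses to a shift-and-add, and inactive input spikes are skipped entirely. All names, parameter values, and the LIF formulation itself are illustrative assumptions, not the paper's actual RTL design.

```python
# Minimal sketch (not the paper's hardware): one time step of a LIF layer
# using the two optimizations described above. Everything here is an
# illustrative assumption about how such a design could behave.

def lif_step(potentials, weight_shifts, spikes, leak_shift=4, threshold=512):
    """Update a LIF layer for one time step using shift-based arithmetic.

    potentials    -- integer membrane potential of each neuron (mutated)
    weight_shifts -- weight_shifts[j][i]: power-of-two exponent of the
                     synapse from input i to neuron j (weight = 2**exponent)
    spikes        -- list of 0/1 input spikes for this time step
    """
    # Event-driven trick: only active spikes are visited; inactive inputs
    # cannot change the membrane potential, so no time is spent on them.
    active = [i for i, s in enumerate(spikes) if s]

    out_spikes = []
    for j, v in enumerate(potentials):
        # Leakage as a right shift instead of a multiplication by beta < 1.
        v -= v >> leak_shift
        # Synaptic integration: power-of-two weights turn each
        # multiply-accumulate into a left shift plus an add.
        for i in active:
            v += 1 << weight_shifts[j][i]
        # Fire and reset when the threshold is crossed.
        if v >= threshold:
            out_spikes.append(1)
            v = 0
        else:
            out_spikes.append(0)
        potentials[j] = v
    return out_spikes


# Example: two neurons, two inputs, only input 0 spiking this step.
pots = [0, 0]
shifts = [[3, 5], [4, 2]]  # exponents, i.e. weights 8, 32, 16, 4
print(lif_step(pots, shifts, [1, 0]))  # -> [0, 0], potentials become [8, 16]
```

In hardware terms, the same idea means each synapse stores only a small shift amount instead of a full-width weight, and the event queue presents only active spikes to the update pipeline, which is where the area and latency savings come from.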
