Energy-efficient FPGA Spiking Neural Accelerators with Supervised and Unsupervised Spike-timing-dependent-Plasticity

The liquid state machine (LSM) is a model of recurrent spiking neural networks (SNNs) and provides an appealing brain-inspired computing paradigm for machine-learning applications. Moreover, because it processes information directly in the form of spiking events, the LSM is amenable to efficient event-driven hardware implementation. However, training SNNs is in general a difficult task, as synaptic weights must be updated based on neural firing activities while achieving a learning objective. In this article, we explore bio-plausible spike-timing-dependent plasticity (STDP) mechanisms to train liquid state machine models with and without supervision. First, we employ a supervised STDP rule to train the output layer of the LSM, delivering good classification performance. Furthermore, a hardware-friendly unsupervised STDP rule is leveraged to train the recurrent reservoir to further boost performance. We pursue efficient hardware implementation of FPGA LSM accelerators by performing algorithm-level optimization of the two proposed training rules and by exploiting the self-organizing behaviors naturally induced by STDP. Several recurrent spiking neural accelerators are built on a Xilinx Zynq ZC706 platform and trained for speech recognition with the TI46 speech corpus as the benchmark. Adopting the two proposed supervised and unsupervised STDP rules improves recognition accuracy over a competitive non-STDP baseline training algorithm by up to 3.47%.
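
The paper's specific supervised and unsupervised rules are not reproduced in this abstract, but a minimal sketch of the general STDP idea may help: a synapse is strengthened when a presynaptic spike precedes a postsynaptic spike and weakened otherwise, with the magnitude decaying exponentially in the spike-time difference. The parameters below (a_plus, a_minus, tau_plus, tau_minus, weight bounds) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def stdp_update(w, t_pre, t_post,
                a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP sketch: potentiate when the presynaptic spike
    precedes the postsynaptic spike, depress otherwise (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post -> potentiation
        dw = a_plus * np.exp(-dt / tau_plus)
    else:         # post before pre -> depression
        dw = -a_minus * np.exp(dt / tau_minus)
    # Keep the synaptic weight within hardware-friendly bounds.
    return float(np.clip(w + dw, w_min, w_max))

# Example: a presynaptic spike at 10 ms followed by a postsynaptic spike
# at 15 ms strengthens the synapse slightly.
print(stdp_update(w=0.5, t_pre=10.0, t_post=15.0))  # ~0.508
```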
