Exploring sparsity of firing activities and clock gating for energy-efficient recurrent spiking neural processors

As a model of recurrent spiking neural networks, the Liquid State Machine (LSM) offers a powerful brain-inspired computing platform for pattern recognition and machine learning applications. Because it operates by processing neural spiking activities, the LSM naturally lends itself to efficient hardware implementation: the typically sparse firing patterns that emerge from the recurrent network can be exploited, and the computational tasks triggered by different firing events can be scheduled intelligently at runtime. We explore these opportunities by presenting an LSM processor architecture with integrated on-chip learning and its FPGA implementation. Our LSM processor leverages the sparsity of firing activities to enable efficient event-driven processing and activity-dependent clock gating. Using spoken English letters from the TI46 [1] speech recognition corpus as a benchmark, we show that the proposed FPGA-based neural processor is up to 29% more energy efficient than a baseline LSM processor, with little extra hardware overhead.
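The abstract's central idea — skip work for neurons that receive no spike events, the software analogue of activity-dependent clock gating — can be illustrated with a minimal sketch. The sketch below is not from the paper: the function name, the LIF parameters (`v_th`, `leak`), and the deferred-leak approximation for idle neurons are all illustrative assumptions.

```python
import numpy as np

def event_driven_step(v, spikes_in, weights, v_th=1.0, leak=0.9):
    """One event-driven update of a LIF reservoir (illustrative sketch).

    Only neurons receiving at least one input spike this timestep are
    updated; idle neurons keep their state (leak deferred), mirroring how
    activity-dependent clock gating leaves inactive hardware units idle.
    """
    active_pre = np.flatnonzero(spikes_in)        # sparse list of spike events
    current = weights[:, active_pre].sum(axis=1)  # accumulate current per event
    gated = current != 0                          # "clock-enabled" neuron mask
    v = v.copy()
    v[gated] = leak * v[gated] + current[gated]   # update active neurons only
    spikes_out = v >= v_th                        # threshold crossing fires
    v[spikes_out] = 0.0                           # reset fired neurons
    return v, spikes_out
```

With sparse firing, `active_pre` is short and `gated` selects few neurons, so most membrane updates are skipped each step — in hardware, those skipped updates correspond to gated clock cycles and thus saved dynamic power.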

[1] Qian Wang et al., "General-purpose LSM learning processor architecture and theoretically guided design space exploration," 2015 IEEE Biomedical Circuits and Systems Conference (BioCAS), 2015.

[2] Herbert Jaeger et al., "Reservoir computing approaches to recurrent neural network training," Computer Science Review, 2009.

[3] G. Bi et al., "Synaptic modification by correlated activity: Hebb's postulate revisited," Annual Review of Neuroscience, 2001.

[4] Yong Zhang et al., "A Digital Liquid State Machine With Biologically Inspired Learning and Its Application to Speech Recognition," IEEE Transactions on Neural Networks and Learning Systems, 2015.

[5] Giacomo Indiveri et al., "Real-Time Classification of Complex Patterns Using Spike-Based Learning in Neuromorphic VLSI," IEEE Transactions on Biomedical Circuits and Systems, 2009.

[6] Tim Schönauer et al., "NeuroPipe-Chip: A digital neuro-processor for spiking neural networks," IEEE Transactions on Neural Networks, 2002.

[7] Bernard Brezzo et al., "TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2015.

[8] T. Martin McGinnity et al., "Neuro-inspired Speech Recognition with Recurrent Spiking Neurons," ICANN, 2008.

[9] B. Schrauwen et al., "Isolated word recognition with the Liquid State Machine: a case study," Information Processing Letters, 2005.

[10] B. Schrauwen et al., "BSA, a fast and accurate spike train encoding scheme," Proceedings of the International Joint Conference on Neural Networks, 2003.

[11] Qian Wang et al., "Liquid state machine based pattern recognition on FPGA with firing-activity dependent power gating and approximate computing," 2016 IEEE International Symposium on Circuits and Systems (ISCAS), 2016.

[12] Peng Li et al., "AP-STDP: A novel self-organizing mechanism for efficient reservoir computing," 2016 International Joint Conference on Neural Networks (IJCNN), 2016.

[13] Peng Li et al., "SSO-LSM: A Sparse and Self-Organizing architecture for Liquid State Machine based neural processors," 2016 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH), 2016.