An Introduction to Spiking Neural Networks: Probabilistic Models, Learning Rules, and Applications.

Spiking Neural Networks (SNNs) are distributed trainable systems whose computing elements, or neurons, are characterized by internal analog dynamics and by sparse, digital synaptic communication. The sparsity of the synaptic spiking inputs and the corresponding event-driven nature of neural processing can be leveraged by hardware implementations that have demonstrated significant energy savings compared with conventional Artificial Neural Networks (ANNs). Most existing training algorithms for SNNs have been designed either for biological plausibility or through conversion from pre-trained ANNs via rate encoding. This paper aims to provide an introduction to SNNs by focusing on a probabilistic signal processing methodology that enables the direct derivation of learning rules that leverage the unique time-encoding capabilities of SNNs. To this end, the paper adopts discrete-time probabilistic models for networked spiking neurons and derives supervised and unsupervised learning rules from first principles using variational inference. Examples and open research problems are also provided.
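
To make the notion of a discrete-time probabilistic spiking neuron concrete, the following is a minimal sketch of a Generalized Linear Model (GLM)-style neuron with Bernoulli spiking, the kind of model the abstract refers to. The filter length, weight values, and input statistics below are illustrative assumptions, not the paper's specification; the full paper defines the feedforward and feedback kernels, and the variational learning rules, precisely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions (illustrative): N_in presynaptic neurons, T time steps, tau-length filters.
N_in, T, tau = 4, 100, 5

# Model parameters (illustrative values): synaptic filter weights, feedback
# (refractory) filter weights, and a bias term.
w = rng.normal(0.0, 0.5, size=(N_in, tau))
v = rng.normal(0.0, 0.5, size=tau)
gamma = -1.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random presynaptic spike trains (Bernoulli with rate 0.2).
x = (rng.random((N_in, T)) < 0.2).astype(float)

s = np.zeros(T)   # postsynaptic spike train
log_p = 0.0       # log-likelihood of the sampled output

for t in range(T):
    # Windows of the most recent tau input/output spikes (zero-padded for t < tau),
    # ordered most-recent-first.
    x_win = np.zeros((N_in, tau))
    s_win = np.zeros(tau)
    lo = max(0, t - tau)
    x_win[:, :t - lo] = x[:, lo:t][:, ::-1]
    s_win[:t - lo] = s[lo:t][::-1]

    # Membrane potential: filtered inputs + filtered own spikes + bias.
    u = np.sum(w * x_win) + np.dot(v, s_win) + gamma
    p = sigmoid(u)                    # firing probability at time t
    s[t] = float(rng.random() < p)    # Bernoulli spike emission
    log_p += s[t] * np.log(p) + (1 - s[t]) * np.log(1 - p)

print(f"emitted {int(s.sum())} spikes, log-likelihood {log_p:.2f}")
```

Because the model assigns an explicit probability to every spike train, learning rules can be obtained by (approximately) maximizing the log-likelihood accumulated above, for example via variational inference when some neurons are latent; the gradient with respect to each synaptic weight factors into pre- and postsynaptic terms, which is what yields the local, three-factor-like update rules discussed in the paper.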
