Studying the interaction of a hidden Markov model with a Bayesian spiking neural network

This paper explores a novel hybrid approach to classifying sequential data, such as isolated spoken words, that combines a hidden Markov model (HMM) with spiking neural networks (SNNs). The HMM, consisting of states and transitions, forms a fixed backbone with nonadaptive transition probabilities. The SNNs implement a Bayesian computation through an appropriately chosen spike-timing-dependent plasticity (STDP) learning rule. A separate SNN, each with the same architecture, is associated with each of the p states of the HMM. Because of the STDP tuning, each SNN implements an expectation-maximization (EM) algorithm that learns the observation probabilities for its particular HMM state. Applied to an isolated-spoken-word recognition problem, a popular sequential-data task, the hybrid model performs accurately and efficiently. The model's novelty and initial success warrant further study; future work aims to broaden its capabilities and improve its biological realism.
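The architecture described above can be sketched in code. The following is a minimal, hypothetical illustration (all names and numbers are invented for this sketch, not taken from the paper): a fixed-backbone HMM whose per-state observation likelihoods come from pluggable models. In the paper those models are STDP-trained SNNs; here simple Gaussian emitters stand in, and classification picks the word model with the highest forward-algorithm likelihood.

```python
import numpy as np

class FixedBackboneHMM:
    """HMM with a fixed (nonadaptive) transition matrix and one
    pluggable observation-likelihood model per state."""

    def __init__(self, pi, A, emitters):
        self.pi = np.asarray(pi)    # initial state distribution
        self.A = np.asarray(A)      # fixed transition probabilities
        self.emitters = emitters    # per-state likelihood functions

    def log_likelihood(self, obs):
        """Scaled forward algorithm: log P(obs | model)."""
        alpha = self.pi * np.array([e(obs[0]) for e in self.emitters])
        scale = alpha.sum()
        ll = np.log(scale)
        alpha = alpha / scale
        for o in obs[1:]:
            alpha = (alpha @ self.A) * np.array([e(o) for e in self.emitters])
            scale = alpha.sum()
            ll += np.log(scale)
            alpha = alpha / scale
        return ll

def gaussian_emitter(mu, sigma):
    """Stand-in for a trained per-state SNN's observation likelihood."""
    return lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Two toy word models sharing the same fixed left-to-right backbone.
backbone = [[0.8, 0.2], [0.0, 1.0]]
models = {
    "low":  FixedBackboneHMM([1.0, 0.0], backbone,
                             [gaussian_emitter(0.0, 1.0), gaussian_emitter(1.0, 1.0)]),
    "high": FixedBackboneHMM([1.0, 0.0], backbone,
                             [gaussian_emitter(4.0, 1.0), gaussian_emitter(5.0, 1.0)]),
}

obs = [0.1, 0.3, 0.9, 1.2]
best = max(models, key=lambda w: models[w].log_likelihood(obs))
```

In the hybrid system, each `emitter` would be replaced by a state-specific SNN whose EM-like STDP learning has tuned it to that state's observation distribution, while the transition matrix stays fixed throughout.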