Learning Beyond Finite Memory in Recurrent Networks of Spiking Neurons

We investigate the possibility of inducing temporal structures without fading memory in recurrent networks of spiking neurons operating strictly in the pulse-coding regime. We extend the existing gradient-based algorithm for training feed-forward spiking neuron networks (SpikeProp [1]) to recurrent network topologies, so that temporal dependencies in the input stream are taken into account. We show that temporal structures with unbounded input memory, specified by simple Moore machines (MMs), can be induced by recurrent spiking neuron networks (RSNNs). The networks are able to discover pulse-coded representations of abstract information-processing states that encode potentially unbounded histories of processed inputs.
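For orientation, a Moore machine is a finite-state machine whose output depends only on its current state. The sketch below is a minimal illustration of the kind of "unbounded input memory" structure described above; the parity automaton used here is a hypothetical example chosen for clarity, not necessarily one of the machines studied in the paper. Its output after each symbol depends on the entire input history, not on any bounded window, which is exactly what a network with fading memory cannot represent for arbitrarily long inputs.

```python
# Minimal Moore machine: the output is a function of the current state only.
# Illustrative example (not taken from the paper): a two-state parity
# automaton whose state tracks the number of 1s seen so far, modulo 2.
class MooreMachine:
    def __init__(self, states, transitions, outputs, start):
        self.states = states            # set of states
        self.transitions = transitions  # (state, input symbol) -> next state
        self.outputs = outputs          # state -> output symbol
        self.start = start
        self.state = start

    def step(self, symbol):
        """Consume one input symbol and return the new state's output."""
        self.state = self.transitions[(self.state, symbol)]
        return self.outputs[self.state]

    def run(self, inputs):
        """Reset to the start state, then emit one output per input symbol."""
        self.state = self.start
        return [self.step(s) for s in inputs]

# Parity machine: output 1 iff an odd number of 1s has been seen so far.
parity = MooreMachine(
    states={"even", "odd"},
    transitions={("even", 0): "even", ("even", 1): "odd",
                 ("odd", 0): "odd", ("odd", 1): "even"},
    outputs={"even": 0, "odd": 1},
    start="even",
)

print(parity.run([1, 0, 1, 1]))  # prints [1, 1, 0, 1]
```

To emulate such a machine, an RSNN must maintain a pulse-coded analogue of the abstract state across arbitrarily long input sequences, which is why these simple machines are a useful probe of non-fading memory.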

[1] Wolfgang Maass et al. Lower Bounds for the Computational Power of Networks of Spiking Neurons. Neural Computation, 1996.

[2] Lee A. Feldkamp et al. Neurocontrol of nonlinear dynamical systems with Kalman filter trained recurrent networks. IEEE Transactions on Neural Networks, 1994.

[3] Paul F. M. J. Verschure et al. Decoding a Temporal Population Code. Neural Computation, 2004.

[4] P. Frasconi et al. Representation of Finite State Automata in Recurrent Radial Basis Function Networks. Machine Learning, 1996.

[5] Mike Casey et al. The Dynamics of Discrete-Time Computation, with Application to Recurrent Neural Networks and Finite State Machine Extraction. Neural Computation, 1996.

[6] W. Gerstner et al. Time structure of the activity in neural network models. Physical Review E, 1995.

[7] Wulfram Gerstner et al. Spiking neurons, 1999.

[8] B. Schrauwen et al. Extending SpikeProp. Proceedings of the 2004 IEEE International Joint Conference on Neural Networks, 2004.

[9] Peter Tiňo et al. Finite State Machines and Recurrent Neural Networks -- Automata and Dynamical Systems Approaches, 1995.

[10] Dario Floreano et al. Evolution of Spiking Neural Controllers for Autonomous Vision-Based Robots. EvoRobots, 2001.

[11] Yoshua Bengio et al. The problem of learning long-term dependencies in recurrent networks. IEEE International Conference on Neural Networks, 1993.

[12] Mikel L. Forcada et al. Learning the Initial State of a Second-Order Recurrent Neural Network during Regular-Language Inference. Neural Computation, 1995.

[13] Jonathan E. Rowe et al. An Evolution Strategy Using a Continuous Version of the Gray-Code Neighbourhood Distribution. GECCO, 2004.

[14] Lee A. Feldkamp et al. Recurrent network training with the decoupled-extended-Kalman-filter algorithm. Defense, Security, and Sensing, 1992.

[15] C. Lee Giles et al. Extraction, Insertion and Refinement of Symbolic Rules in Dynamically Driven Recurrent Neural Networks, 1993.

[16] Xin Yao et al. Fast Evolution Strategies. Evolutionary Programming, 1997.

[17] Kathryn B. Laskey et al. Neural Coding: Higher-Order Temporal Patterns in the Neurostatistics of Cell Assemblies. Neural Computation, 2000.

[18] Anthony M. Zador et al. Binary Coding in Auditory Cortex. NIPS, 2002.

[19] J. Csicsvari et al. Replay and Time Compression of Recurring Spike Sequences in the Hippocampus. The Journal of Neuroscience, 1999.

[20] Henrik Jacobsson et al. Rule Extraction from Recurrent Neural Networks: A Taxonomy and Review. Neural Computation, 2005.

[21] Henry Markram et al. Synapses as dynamic memory buffers. Neural Networks, 2002.

[22] Dario Floreano et al. From Wheels to Wings with Evolutionary Spiking Circuits. Artificial Life, 2003.

[23] Paul J. Werbos et al. Generalization of backpropagation with application to a recurrent gas market model. Neural Networks, 1988.

[24] T. Natschläger et al. Spatial and temporal pattern analysis via spiking neurons. Network, 1998.

[25] Xin Yao et al. Evolving artificial neural networks. Proceedings of the IEEE, 1999.

[26] C. Lee Giles et al. Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks. Neural Computation, 1992.

[27] Henry Markram et al. Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations. Neural Computation, 2002.

[28] James L. McClelland et al. Finite State Automata and Simple Recurrent Networks. Neural Computation, 1989.

[29] Jeffrey D. Ullman et al. Introduction to Automata Theory, Languages and Computation, 1979.

[30] C. Lee Giles et al. Extraction of rules from discrete-time recurrent neural networks. Neural Networks, 1996.

[31] Christopher M. Bishop et al. Pulsed Neural Networks, 1998.

[32] Peter Tiňo et al. Learning and Extracting Initial Mealy Automata with a Modular Neural Network Model. Neural Computation, 1995.

[33] Sander M. Bohte et al. Spiking Neural Networks, 2003.

[34] Sander M. Bohte et al. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing, 2000.

[35] Hava T. Siegelmann et al. On the Computational Power of Neural Nets. Journal of Computer and System Sciences, 1995.

[36] Sandiway Fong et al. Natural Language Grammatical Inference with Recurrent Neural Networks. IEEE Transactions on Knowledge and Data Engineering, 2000.

[37] Naftali Tishby et al. Cortical activity flips among quasi-stationary states. Proceedings of the National Academy of Sciences of the United States of America, 1995.

[38] Wolfgang Maass et al. Spiking neurons and the induction of finite state machines. Theoretical Computer Science, 2002.

[39] Yoshua Bengio et al. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 1994.