Learning Beyond Finite Memory in Recurrent Networks of Spiking Neurons

We investigate the possibility of inducing temporal structures without fading memory in recurrent networks of spiking neurons that operate strictly in the pulse-coding regime. We extend SpikeProp (Bohte, Kok, & La Poutré, 2002), an existing gradient-based algorithm for training feedforward spiking neuron networks, to recurrent network topologies, so that temporal dependencies in the input stream are taken into account. It is shown that temporal structures with unbounded input memory, specified by simple Moore machines (MM), can be induced by recurrent spiking neuron networks (RSNN). The networks are able to discover pulse-coded representations of abstract information-processing states that code potentially unbounded histories of processed inputs. We show that it is often possible to extract the target MM from a trained RSNN by grouping together similar spike trains appearing in the recurrent layer. Even when the target MM was not perfectly induced in an RSNN, the extraction procedure was able to reveal weaknesses of the induced mechanism and the extent to which the target machine had been learned.
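
The extraction procedure lends itself to a short sketch. The Python fragment below is our own illustration, not the authors' code: the function name, the data layout, and the use of k-means as the grouping method are all assumptions; the abstract only specifies that similar spike trains in the recurrent layer are grouped into abstract states, from which the machine is read off.

```python
# Illustrative sketch only (our assumptions, not the paper's code): group
# similar recurrent-layer spike trains into abstract states with k-means,
# then tabulate Moore-machine transitions from consecutive state labels.
import numpy as np
from sklearn.cluster import KMeans

def extract_moore_machine(spike_vectors, inputs, outputs, n_states):
    """spike_vectors: (T, d) firing times of the d recurrent neurons after
    each of T input symbols; inputs/outputs: length-T symbol sequences."""
    # Step 1: quantize the continuous spike-time vectors into n_states groups.
    labels = KMeans(n_clusters=n_states, n_init=10).fit_predict(spike_vectors)

    # Step 2: read off the machine. In a Moore machine the output depends on
    # the state alone, and each input symbol drives a state transition.
    transitions = {}    # (state, input symbol) -> next state
    state_output = {}   # state -> output symbol
    for t in range(len(inputs) - 1):
        transitions[(labels[t], inputs[t + 1])] = labels[t + 1]
        state_output[labels[t]] = outputs[t]
    state_output[labels[-1]] = outputs[-1]
    return transitions, state_output

# Toy run on random stand-in data; a real run would use spike times recorded
# from the trained RSNN while it processes a long input string.
rng = np.random.default_rng(0)
trans, outs = extract_moore_machine(
    spike_vectors=rng.normal(size=(500, 6)),
    inputs=rng.integers(0, 2, size=500),
    outputs=rng.integers(0, 2, size=500),
    n_states=3,
)
```

If the same (state, input) pair is ever assigned two different successor states across the sequence, the grouping is internally inconsistent; inspecting such conflicts is one way an extraction of this kind can expose how far the target machine was actually learned.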

[1] Sander M. Bohte, et al. Spiking Neural Networks, 2003.

[2] Henry Markram, et al. Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations, 2002, Neural Computation.

[3] Sander M. Bohte, et al. Error-backpropagation in temporally encoded networks of spiking neurons, 2000, Neurocomputing.

[4] Kathryn B. Laskey, et al. Neural Coding: Higher-Order Temporal Patterns in the Neurostatistics of Cell Assemblies, 2000, Neural Computation.

[5] Xin Yao, et al. Evolving artificial neural networks, 1999, Proc. IEEE.

[6] Hava T. Siegelmann, et al. On the Computational Power of Neural Nets, 1995, J. Comput. Syst. Sci.

[7] C. Lee Giles, et al. Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks, 1992, Neural Computation.

[8] Jonathan E. Rowe, et al. An Evolution Strategy Using a Continuous Version of the Gray-Code Neighbourhood Distribution, 2004, GECCO.

[9] Lee A. Feldkamp, et al. Neurocontrol of nonlinear dynamical systems with Kalman filter trained recurrent networks, 1994, IEEE Trans. Neural Networks.

[10] Paul F. M. J. Verschure, et al. Decoding a Temporal Population Code, 2004, Neural Computation.

[11] Wulfram Gerstner, et al. Spiking neurons, 1999.

[12] Mikel L. Forcada, et al. Learning the Initial State of a Second-Order Recurrent Neural Network during Regular-Language Inference, 1995, Neural Computation.

[13] B. Schrauwen, et al. Extending SpikeProp, 2004, IEEE International Joint Conference on Neural Networks (IJCNN).

[14] Dario Floreano, et al. Evolution of Spiking Neural Controllers for Autonomous Vision-Based Robots, 2001, EvoRobots.

[15] Yoshua Bengio, et al. The problem of learning long-term dependencies in recurrent networks, 1993, IEEE International Conference on Neural Networks.

[16] Paul J. Werbos. Generalization of backpropagation with application to a recurrent gas market model, 1988, Neural Networks.

[17] T. Natschläger, et al. Spatial and temporal pattern analysis via spiking neurons, 1998, Network.

[18] C. Lee Giles, et al. Extraction, Insertion and Refinement of Symbolic Rules in Dynamically Driven Recurrent Neural Networks, 1993.

[19] Henry Markram, et al. Synapses as dynamic memory buffers, 2002, Neural Networks.

[20] P. Frasconi, et al. Representation of Finite State Automata in Recurrent Radial Basis Function Networks, 1996, Machine Learning.

[21] Anthony M. Zador, et al. Binary Coding in Auditory Cortex, 2002, NIPS.

[22] Dario Floreano, et al. From Wheels to Wings with Evolutionary Spiking Circuits, 2003, Artificial Life.

[23] Wolfgang Maass, et al. Lower Bounds for the Computational Power of Networks of Spiking Neurons, 1996, Neural Computation.

[24] Mike Casey, et al. The Dynamics of Discrete-Time Computation, with Application to Recurrent Neural Networks and Finite State Machine Extraction, 1996, Neural Computation.

[25] James L. McClelland, et al. Finite State Automata and Simple Recurrent Networks, 1989, Neural Computation.

[26] Jeffrey D. Ullman, et al. Introduction to Automata Theory, Languages and Computation, 1979.

[27] Christopher M. Bishop, et al. Pulsed Neural Networks, 1998.

[28] Peter Tiňo, et al. Learning and Extracting Initial Mealy Automata with a Modular Neural Network Model, 1995, Neural Computation.

[29] Peter Tiňo, et al. Finite State Machines and Recurrent Neural Networks -- Automata and Dynamical Systems Approaches, 1995.

[30] C. Lee Giles, et al. Extraction of rules from discrete-time recurrent neural networks, 1996, Neural Networks.

[31] Michael C. Mozer, et al. Neural net architectures for temporal sequence processing, 2007.

[32] Sandiway Fong, et al. Natural Language Grammatical Inference with Recurrent Neural Networks, 2000, IEEE Trans. Knowl. Data Eng.

[33] Naftali Tishby, et al. Cortical activity flips among quasi-stationary states, 1995, Proceedings of the National Academy of Sciences of the United States of America.

[34] Lee A. Feldkamp, et al. Recurrent network training with the decoupled-extended-Kalman-filter algorithm, 1992, Defense, Security, and Sensing.

[35] J. Csicsvari, et al. Replay and Time Compression of Recurring Spike Sequences in the Hippocampus, 1999, The Journal of Neuroscience.

[36] Wolfgang Maass, et al. Spiking neurons and the induction of finite state machines, 2002, Theor. Comput. Sci.

[37] Yoshua Bengio, et al. Learning long-term dependencies with gradient descent is difficult, 1994, IEEE Trans. Neural Networks.

[38] W. Gerstner, et al. Time structure of the activity in neural network models, 1995, Physical Review E: Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics.

[39] Henrik Jacobsson, et al. Rule extraction from recurrent neural networks, 2006.

[40] Xin Yao, et al. Fast Evolution Strategies, 1997, Evolutionary Programming.