Quantifying the Effect of Learning on Recurrent Spiking Neurons

This work measures the response of recurrent spiking neurons to the application of a learning rule, in a liquid state machine context. Two indicators are considered for monitoring the effect of learning on-line: the separation property, which has been studied in previous work, and an incremental version of the statistical complexity measure, introduced here expressly for this purpose. It is found that while separation increases, a neuron's average statistical complexity decreases when a learning rule is applied: neurons become more predictable and their behavior simplifies as an effect of learning. A key contribution of this work is to quantify this phenomenon.
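
To make the first indicator concrete, here is a minimal sketch of one common formulation of the separation property: the mean pairwise distance between the centroids of liquid-state vectors obtained from inputs of different classes. The function name `separation`, the centroid-based definition, and the NumPy state representation are illustrative assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np

def separation(states_by_class):
    """Estimate the separation property of a liquid.

    One common formulation: the mean pairwise Euclidean distance
    between the centroids of liquid-state vectors, one centroid per
    input class (the paper's exact definition may differ).

    states_by_class: list of arrays, each of shape
    (n_samples, n_neurons), holding readout-time state vectors for
    one input class. At least two classes are expected.
    """
    centroids = [s.mean(axis=0) for s in states_by_class]
    dists = [np.linalg.norm(ci - cj)
             for i, ci in enumerate(centroids)
             for cj in centroids[i + 1:]]
    return float(np.mean(dists))
```

Under this reading, the paper's observation can be monitored on-line: the value returned by such an estimator should rise as the learning rule acts, while the per-neuron complexity estimate (sketched below) should fall.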
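For the second indicator, statistical complexity is the Shannon entropy of the distribution over a process's causal states, C_mu = -sum_s P(s) log2 P(s). The sketch below is a crude stand-in for proper causal-state reconstruction (e.g. the CSSR algorithm of Shalizi et al.), not the paper's incremental estimator: it groups fixed-length histories of a binary spike train by their empirical next-symbol distributions and takes the entropy of the resulting state weights. The window length `L` and merge tolerance `tol` are hypothetical parameters.

```python
import numpy as np
from collections import defaultdict

def statistical_complexity(symbols, L=4, tol=0.05):
    """Rough estimate of C_mu for a binary spike train (0/1 ints).

    Histories of length L whose empirical next-symbol distributions
    agree within `tol` are merged into one approximate causal state;
    C_mu is then the entropy of the state occupation probabilities.
    """
    # Empirical next-symbol counts for each length-L history.
    counts = defaultdict(lambda: np.zeros(2))
    for t in range(len(symbols) - L):
        hist = tuple(symbols[t:t + L])
        counts[hist][symbols[t + L]] += 1

    # Greedily merge histories with similar predictive distributions.
    states = []  # each entry: [representative distribution, weight]
    for hist, c in counts.items():
        p = c / c.sum()
        for s in states:
            if np.abs(p - s[0]).max() < tol:
                s[1] += c.sum()
                break
        else:
            states.append([p, c.sum()])

    # C_mu = entropy of the distribution over approximate causal states.
    w = np.array([s[1] for s in states], dtype=float)
    w /= w.sum()
    return float(-(w * np.log2(w)).sum())
```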
