Stability and Topology in Reservoir Computing

Recently, Jaeger and others have put forth the paradigm of "reservoir computing" as a way of computing with highly recurrent neural networks. The reservoir is a collection of neurons connected to each other at random, with fixed weights. Among other things, the model has been shown to be effective at temporal pattern recognition, and it has been held up as an appropriate model of how certain aspects of the brain work (particularly in its guise as the "liquid state machine" of Maass et al.). In this work we show that although the model is known to generalize, and is thus robust to errors in its input, it is NOT resistant to errors in the model itself: small malfunctions or distortions of the reservoir render previous training ineffective. Hence the model as currently presented is not appropriate as a biological model, and this also suggests limits on its applicability to pattern recognition. However, we show that enforcing topological constraints on the reservoir, in particular a small-world topology, does make the model fault tolerant. This implies that "natural" computational systems must have specific topologies, and that uniform random connectivity is not appropriate.
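As a minimal illustration of the construction the abstract contrasts with uniform random connectivity, the sketch below builds an echo-state-style reservoir on a Watts–Strogatz small-world graph (ring lattice plus random rewiring) and drives it with a scalar input. This is not the paper's implementation; all function names and parameter values are illustrative assumptions, and the spectral-radius rescaling is the standard practical condition for the echo state property.

```python
import numpy as np

def small_world_reservoir(n, k, p, spectral_radius=0.9, seed=0):
    """Reservoir weight matrix on a Watts-Strogatz small-world graph.

    Start from a ring lattice in which each neuron connects to its k
    nearest neighbours, then rewire each edge to a random target with
    probability p (small p gives the small-world regime). Weights are
    random and rescaled so the spectral radius is below 1.
    """
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            A[i, (i + j) % n] = True
            A[i, (i - j) % n] = True
    for i in range(n):
        for j in np.flatnonzero(A[i]):
            if rng.random() < p:
                # Rewire edge (i, j) to a random target; for simplicity
                # this sketch tolerates occasional self-loops/duplicates.
                A[i, j] = False
                A[i, rng.integers(n)] = True
    W = np.where(A, rng.standard_normal((n, n)), 0.0)
    W *= spectral_radius / np.abs(np.linalg.eigvals(W)).max()
    return W

def run_reservoir(W, inputs, w_in_scale=0.5, seed=0):
    """Drive the fixed reservoir with a scalar sequence; return states."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    w_in = w_in_scale * rng.standard_normal(n)  # fixed random input weights
    x = np.zeros(n)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)  # leaky-free tanh reservoir update
        states.append(x.copy())
    return np.array(states)
```

Setting p = 0 gives a pure ring lattice and p = 1 approaches uniform random connectivity, so the same routine spans both topologies discussed in the abstract; in reservoir computing only a readout layer trained on the collected states is task-specific.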

[1] Wolfgang Banzhaf, et al. Advances in Artificial Life, 2003, Lecture Notes in Computer Science.

[2] Jianfeng Feng, et al. Computational neuroscience, 1986, Behavioral and Brain Sciences.

[3] A.-L. Barabási, R. Albert. Emergence of scaling in random networks, 1999, Science.

[4] Herbert Jaeger, et al. The "echo state" approach to analysing and training recurrent neural networks, 2001.

[5] G. Bianconi, A.-L. Barabási. Competition and multiscaling in evolving networks, 2000, cond-mat/0011029.

[6] Lav R. Varshney, et al. Structural Properties of the Caenorhabditis elegans Neuronal Network, 2009, PLoS Comput. Biol.

[7] H. Sompolinsky, et al. The tempotron: a neuron that learns spike timing–based decisions, 2006, Nature Neuroscience.

[8] Henry Markram, et al. Computational models for generic cortical microcircuits, 2004.

[9] Wolfgang Maass, et al. Paradigms for Computing with Spiking Neurons, 2002.

[10] Henry Markram, et al. Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations, 2002, Neural Computation.

[11] Chrisantha Fernando, et al. Pattern Recognition in a Bucket, 2003, ECAL.

[12] Herbert Jaeger, et al. Reservoir computing approaches to recurrent neural network training, 2009, Comput. Sci. Rev.

[13] Danielle S. Bassett, et al. Small-World Brain Networks, 2006, The Neuroscientist.

[14] Shimon Marom, et al. Modeling the process of rate selection in neuronal activity, 2002, Journal of Theoretical Biology.

[15] R. Albert, A.-L. Barabási. Topology of evolving networks: local events and universality, 2000, Physical Review Letters.

[16] W. S. McCulloch, W. Pitts. A Logical Calculus of the Ideas Immanent in Nervous Activity (1943), 2021, Ideas That Created the Future.

[17] Bernard Widrow, et al. Adaptive switching circuits, 1988.

[18] Eugene M. Izhikevich. Simple model of spiking neurons, 2003, IEEE Trans. Neural Networks.