Giving meaning to cycles to go beyond the limitations of fixed-point attractors

This chapter focuses on associative memories in recurrent artificial neural networks built from the same kind of very simple neurons usually found in neural nets. Over the past 25 years, much of the research effort has been devoted to coding information in fixed-point attractors. From a cognitive or neurophysiological point of view, this choice is rather arbitrary. This chapter argues for switching to another encoding mechanism, one that exploits limit cycles with weak chaotic dynamics in the background, rather than fixed points. It is shown how these attractors overcome in many respects the limitations of fixed points: better correspondence with neurophysiological facts, increased encoding capacity, improved robustness during the retrieval phase, and fewer spurious attractors. However, how to learn and exploit these cycles for encoding the relevant information remains an open issue. Two learning strategies are proposed, tested and compared: one rather classical, very reminiscent of the usual supervised Hebbian learning, and one more original, in which the coding attractor is chosen by the network itself. Computer experiments with these two learning strategies are presented and explained. The second learning mechanism is advocated both for its cognitive relevance and for its much better performance in encoding and retrieving information. Since the kind of dynamics observed in our experiments (a cyclic attractor when a stimulus is presented, and weak background chaos in the absence of stimulation) closely resembles neurophysiological observations, and although no straightforward applications have been found so far, we limit our justification to this qualitative mapping with brain data and to the need to better understand how a physical device such as a brain can store and retrieve a huge quantity of information in a robust way.
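As an illustration of the first, more classical strategy, the sketch below stores a short sequence of random binary patterns as a limit cycle using a time-asymmetric Hebbian rule, in the spirit of Amari's early sequence-learning networks. The network size `N`, the cycle length `L`, the binary ±1 neurons and the synchronous update are assumptions made for this example only; the chapter's own networks and learning rule may differ in their details.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64   # number of +/-1 binary neurons (assumption for this sketch)
L = 4    # length of the cycle to store

# One random binary pattern per step of the cycle.
patterns = rng.choice([-1, 1], size=(L, N))

# Time-asymmetric Hebbian rule: each pattern is wired to evoke its
# successor, so the stored memory is a limit cycle, not a fixed point.
W = np.zeros((N, N))
for mu in range(L):
    W += np.outer(patterns[(mu + 1) % L], patterns[mu]) / N

def step(state):
    """One synchronous, deterministic update of all neurons."""
    h = W @ state
    return np.where(h >= 0, 1, -1)

# Retrieval: start from a corrupted version of the first pattern and
# let the dynamics run; the overlaps show the network cycling through
# the stored sequence pattern by pattern.
state = patterns[0].copy()
flip = rng.random(N) < 0.1           # flip roughly 10% of the bits
state[flip] *= -1
for t in range(8):
    state = step(state)
    overlaps = patterns @ state / N  # +1.0 means a perfect match
    print(f"t={t}  overlaps={np.round(overlaps, 2)}")
```

The second strategy differs mainly in who chooses the attractor: instead of imposing `patterns` from outside as above, the stimulated network would be left to settle into whatever cycle its own dynamics select, with Hebbian reinforcement then applied to that self-chosen trajectory.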
