Slow stochastic Hebbian learning of classes of stimuli in a recurrent neural network.

We study unsupervised Hebbian learning in a recurrent network in which synapses have a finite number of stable states. Stimuli received by the network are drawn at random at each presentation from a set of classes. Each class is defined as a cluster in stimulus space, centred on the class prototype. The presentation protocol is chosen to mimic the protocols of visual memory experiments in which a set of stimuli is presented repeatedly in a random order. The statistics of the input stream may be stationary or changing. Each stimulus induces, in a stochastic way, transitions between stable synaptic states. The learning dynamics are studied analytically in the slow learning limit, in which a given stimulus has to be presented many times before it is memorized, i.e. before synaptic modifications enable a pattern of activity correlated with the stimulus to become an attractor of the recurrent network. We show that in this limit the synaptic matrix becomes more correlated with the class prototypes than with any single instance of a class. We also show that the number of classes that can be learned increases sharply when the coding level decreases, and we determine the speeds at which classes are learned and forgotten when the statistics of the input stream change.
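The mechanism described above can be illustrated with a minimal simulation sketch. This is not the paper's exact model; all parameter values (network size, coding level, noise, transition probability) are illustrative assumptions. Binary synapses are stochastically potentiated when pre- and postsynaptic units are both active in the presented stimulus, and stochastically depressed on a mismatch; after many presentations of noisy instances of one class, the synaptic matrix correlates better with the prototype than with any individual instance:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100   # neurons (illustrative size)
f = 0.2   # coding level: fraction of active neurons
x = 0.15  # within-class bit-flip noise around the prototype
q = 0.02  # transition probability per presentation (slow learning: q << 1)
T = 2000  # number of stimulus presentations

# class prototype: random binary pattern at coding level f
proto = (rng.random(N) < f).astype(int)

def draw_instance():
    # flip active bits off with prob x; turn inactive bits on with a
    # matched probability so the coding level stays close to f
    p_on = np.where(proto == 1, 1 - x, x * f / (1 - f))
    return (rng.random(N) < p_on).astype(int)

J = rng.integers(0, 2, size=(N, N))  # binary synapses, random initial state
np.fill_diagonal(J, 0)

instances = []
for _ in range(T):
    xi = draw_instance()
    instances.append(xi)
    both = np.outer(xi, xi)                              # pre and post active -> LTP candidate
    one = np.outer(xi, 1 - xi) + np.outer(1 - xi, xi)    # mismatch -> LTD candidate
    flip = rng.random((N, N)) < q                        # stochastic transitions
    J = np.where((both == 1) & flip, 1, J)
    J = np.where((one == 1) & flip, 0, J)
    np.fill_diagonal(J, 0)

def overlap(pattern):
    # correlation between J and the Hebbian outer product of a pattern
    target = np.outer(pattern, pattern)
    np.fill_diagonal(target, 0)
    return np.corrcoef(J.ravel(), target.ravel())[0, 1]

c_proto = overlap(proto)
c_inst = np.mean([overlap(xi) for xi in instances[-50:]])
print(c_proto, c_inst)
```

Because each synapse equilibrates to the time-averaged statistics of the input stream, and the prototype is by construction the average of the instances, `c_proto` exceeds `c_inst` once learning has converged, in line with the prototype-extraction result stated in the abstract.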