On stability and associative recall of memories in attractor neural networks

Attractor neural networks such as the Hopfield model can serve as models of associative memory. An efficient associative memory should store a large number of patterns, all of which must be stable. We examine in detail the meaning and definition of stability of network states. We reexamine the notions of retrieval, recognition and recall, assign a precise mathematical meaning to each, and study how they relate to one another and to the memory capacity of the network. We showed earlier in this journal that an orthogonalization scheme provides an effective way of overcoming the catastrophic interference that limits the memory capacity of the Hopfield model. It is not immediately apparent whether the improvement brought by orthogonalization affects the processes of retrieval, recognition and recall equally. We show that it affects them to different degrees and hence alters the relations between them. We then show that the conditions for pattern stability can be split into a necessary condition (recognition) and a sufficient one (recall). We interpret in cognitive terms the information stored in the Hopfield model, both before and after orthogonalization. Finally, we study how orthogonalization alters the dynamics of the Hopfield network and how these changes affect its efficiency as an associative memory.
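The setting described above can be illustrated with a minimal sketch. The snippet below stores random bipolar patterns in a standard Hopfield network via the Hebbian rule, checks their stability (each stored pattern should be a fixed point of the update dynamics at low load), and then orthogonalizes the patterns before storage. The use of `numpy.linalg.qr` for Gram-Schmidt, and the projection-matrix form of the orthogonalized weights, are illustrative assumptions on my part; the paper's own orthogonalization scheme may differ in its details.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5  # network size and number of stored patterns (load P/N well below capacity)
patterns = rng.choice([-1.0, 1.0], size=(P, N))

# Hebbian (outer-product) storage: W_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def update(W, state):
    """One synchronous sign-update of the network state."""
    s = np.sign(W @ state)
    s[s == 0] = 1.0  # break ties deterministically
    return s

# Stability check: a stored pattern is stable if one update leaves it unchanged
stable = all(np.array_equal(update(W, p), p) for p in patterns)

# Gram-Schmidt orthogonalization of the patterns before storage
# (QR decomposition performs Gram-Schmidt; an illustrative assumption)
Q, _ = np.linalg.qr(patterns.T)  # columns of Q are the orthonormalized patterns
ortho = Q.T
W_ortho = ortho.T @ ortho  # projection onto the span of the stored patterns

# Each orthonormalized pattern is now an exact fixed point of the linear map:
# W_ortho @ q == q, so stability holds with no crosstalk between memories.
```

At this load (P/N = 0.05, far below the Hopfield capacity of roughly 0.14N), the Hebbian network keeps all stored patterns stable; orthogonalization removes the crosstalk term entirely, which is the mechanism by which it defers catastrophic interference.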
