Artificial neural networks whispering to the brain: nonlinear system attractors induce familiarity with never seen items

Attractors of nonlinear neural systems are at the core of the memory self-refreshing mechanism of human memory models that suppose memories are dynamically maintained in a distributed network [Ans, B., and Rousset, S. (1997), ‘Avoiding Catastrophic Forgetting by Coupling Two Reverberating Neural Networks’, Comptes Rendus de l'Académie des Sciences Paris, Life Sciences, 320, 989–997; Ans, B., and Rousset, S. (2000), ‘Neural Networks with a Self-Refreshing Memory: Knowledge Transfer in Sequential Learning Tasks Without Catastrophic Forgetting’, Connection Science, 12, 1–19; Ans, B., Rousset, S., French, R.M., and Musca, S.C. (2002), ‘Preventing Catastrophic Interference in Multiple-Sequence Learning Using Coupled Reverberating Elman Networks’, in Proceedings of the 24th Annual Meeting of the Cognitive Science Society, eds. W.D. Gray and C.D. Schunn, Mahwah, NJ: Lawrence Erlbaum Associates, pp. 71–76; Ans, B., Rousset, S., French, R.M., and Musca, S.C. (2004), ‘Self-Refreshing Memory in Artificial Neural Networks: Learning Temporal Sequences Without Catastrophic Forgetting’, Connection Science, 16, 71–99; Ans, B. (2004), ‘Sequential Learning in Distributed Neural Networks Without Catastrophic Forgetting: A Single and Realistic Self-Refreshing Memory Can Do It’, Neural Information Processing – Letters and Reviews, 4, 27–32]. Are humans able to learn never seen items from attractor patterns generated by a highly distributed artificial neural network? First, an opposition method was implemented to ensure that the attractors are not the items used to train the network (the source items): attractors were selected to be more similar (both at the exemplar and the centroid level) to some control items than to the source items. Despite this severe selection, blank networks trained only on the selected attractors performed better at test on the never seen source items than on the never seen control items.
The results of two behavioural experiments using the opposition method show that humans exhibit more familiarity with the never seen source items than with the never seen control items, just as networks do. Thus, humans are sensitive to the particular type of information that allows distributed artificial neural networks to dynamically maintain their memory, and this information does not amount to the exemplars used to train the network that produced the attractors.
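The simulation pipeline the abstract describes (train a network on source items, reverberate it to collect attractors, then keep only attractors that pass the opposition criterion) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' actual architecture: the one-hidden-layer tanh autoassociator, the item dimensions, the number of reverberation steps, and the Euclidean nearest-item distance are all choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(d, hidden):
    # small autoassociator: input -> hidden -> reconstructed input
    return [rng.normal(0, 0.2, (d, hidden)), rng.normal(0, 0.2, (hidden, d))]

def forward(net, x):
    W1, W2 = net
    return np.tanh(np.tanh(x @ W1) @ W2)

def train(net, items, epochs=3000, lr=0.05):
    # plain gradient descent on squared reconstruction error
    W1, W2 = net
    n = len(items)
    for _ in range(epochs):
        h = np.tanh(items @ W1)
        out = np.tanh(h @ W2)
        err = out - items
        g2 = err * (1 - out ** 2)          # gradient at output pre-activation
        g1 = (g2 @ W2.T) * (1 - h ** 2)    # backpropagated to hidden layer
        W2 -= lr * (h.T @ g2) / n
        W1 -= lr * (items.T @ g1) / n
    return net

def attractor(net, steps=60):
    # reverberate: inject random noise and loop the output back to the
    # input until the activity settles on an attractor of the network
    x = rng.uniform(-1, 1, net[0].shape[0])
    for _ in range(steps):
        x = forward(net, x)
    return x

def dist_to(x, items):
    # Euclidean distance to the nearest item (exemplar-level similarity)
    return np.linalg.norm(items - x, axis=1).min()

d = 12
source = rng.choice([-0.9, 0.9], size=(6, d))   # items the network learns
control = rng.choice([-0.9, 0.9], size=(6, d))  # items it never sees

net = train(init_net(d, 16), source)

# opposition method: among many reverberated attractors, keep only those
# that are *closer* to the unseen control items than to the source items
# that actually trained the network
candidates = [attractor(net) for _ in range(300)]
selected = [a for a in candidates
            if dist_to(a, control) < dist_to(a, source)]
```

Training a second, blank network on `selected` and comparing its test error on the source versus control items would mirror the network result reported above. With this toy setup the selected set can be small or even empty, which is precisely why the opposition criterion is described as severe.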

[1] Chris I. Baker, et al., ‘Acquisition of Long-Term Visual Representations: Psychological and Neural Mechanisms’, 2005.

[2] Serban C. Musca, et al., ‘Preventing Catastrophic Interference in Multiple-Sequence Learning Using Coupled Reverberating Elman Networks’, Proceedings of the Twenty-Fourth Annual Conference of the Cognitive Science Society, 2002.

[3] Stephan Lewandowsky, ‘On the Relation Between Catastrophic Interference and Generalization in Connectionist Networks’, 1994.

[4] D. Witherspoon, et al., ‘The Effect of a Prior Presentation on Temporal Judgments in a Perceptual Identification Task’, Memory & Cognition, 1985.

[5] Anthony V. Robins, et al., ‘Catastrophic Forgetting, Rehearsal and Pseudorehearsal’, Connection Science, 1995.

[6] Bernard Ans, ‘Sequential Learning in Distributed Neural Networks Without Catastrophic Forgetting: A Single and Realistic Self-Refreshing Memory Can Do It’, Neural Information Processing – Letters and Reviews, 2004.

[7] K. McRae, et al., ‘Catastrophic Interference is Eliminated in Pretrained Networks’, 1993.

[8] Anthony V. Robins, et al., ‘The Consolidation of Learning During Sleep: Comparing the Pseudorehearsal and Unlearning Accounts’, Neural Networks, 1999.

[9] Geoffrey E. Hinton, ‘Connectionist Learning Procedures’, Artificial Intelligence, 1989.

[10] D. Schacter, ‘Perceptual Representation Systems and Implicit Memory’, Annals of the New York Academy of Sciences, 1990.

[11] R. Ratcliff, et al., ‘Connectionist Models of Recognition Memory: Constraints Imposed by Learning and Forgetting Functions’, Psychological Review, 1990.

[12] R. Ratcliff, et al., ‘Bias in the Priming of Object Decisions’, Journal of Experimental Psychology: Learning, Memory, and Cognition, 1995.

[13] James L. McClelland, et al., ‘Distributed Memory and the Representation of General and Specific Information’, Journal of Experimental Psychology: General, 1985.

[14] E. Tulving, et al., ‘Priming and Human Memory Systems’, Science, 1990.

[15] Stephen Grossberg, et al., ‘Competitive Learning: From Interactive Activation to Adaptive Resonance’, Cognitive Science, 1987.

[16] Pierre Poirier, et al., ‘Atomistic Learning in Non-Modular Systems’, 2005.

[17] Noel E. Sharkey, et al., ‘An Analysis of Catastrophic Interference’, Connection Science, 1995.

[18] James L. McClelland, et al., ‘Why There Are Complementary Learning Systems in the Hippocampus and Neocortex: Insights from the Successes and Failures of Connectionist Models of Learning and Memory’, Psychological Review, 1995.

[19] L. Jacoby, ‘Perceptual Enhancement: Persistent Effects of an Experience’, Journal of Experimental Psychology: Learning, Memory, and Cognition, 1983.

[20] Bernard Ans, et al., ‘Neural Networks with a Self-Refreshing Memory: Knowledge Transfer in Sequential Learning Tasks Without Catastrophic Forgetting’, Connection Science, 2000.

[21] R. French, ‘Dynamically Constraining Connectionist Networks to Produce Distributed, Orthogonal Representations to Reduce Catastrophic Interference’, Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, 1994.

[22] Robert M. French, et al., ‘Semi-Distributed Representations and Catastrophic Forgetting in Connectionist Networks’, 1992.

[23] B. Ans and S. Rousset, ‘Avoiding Catastrophic Forgetting by Coupling Two Reverberating Neural Networks’, Comptes Rendus de l'Académie des Sciences Paris, Life Sciences, 1997.

[24] Anthony V. Robins, et al., ‘Consolidation in Neural Networks and in the Sleeping Brain’, Connection Science, 1996.

[25] G. Mandler, ‘Recognizing: The Judgment of Previous Occurrence’, 1980.

[26] Robert M. French, et al., ‘Pseudopatterns and Dual-Network Memory Models: Advantages and Shortcomings’, NCPW, 2000.

[27] R. French, ‘Catastrophic Forgetting in Connectionist Networks’, Trends in Cognitive Sciences, 1999.

[28] Anthony V. Robins, et al., ‘Catastrophic Forgetting and the Pseudorehearsal Solution in Hopfield-Type Networks’, Connection Science, 1998.

[29] Robert M. French, et al., ‘Self-Refreshing Memory in Artificial Neural Networks: Learning Temporal Sequences Without Catastrophic Forgetting’, Connection Science, 2004.

[30] Larry L. Jacoby, et al., ‘Illusions of Immediate Memory: Evidence of an Attributional Basis for Feelings of Familiarity and Perceptual Quality’, 1990.

[31] R. Nosofsky, ‘Similarity Scaling and Cognitive Process Models’, 1992.

[32] Michael McCloskey, et al., ‘Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem’, 1989.

[33] L. Jacoby, ‘A Process Dissociation Framework: Separating Automatic from Intentional Uses of Memory’, 1991.

[34] S. Lewandowsky, et al., ‘Catastrophic Interference in Neural Networks’, 1995.

[35] E. Capaldi, et al., ‘The Organization of Behavior’, Journal of Applied Behavior Analysis, 1992.

[36] Stephen Grossberg, et al., ‘The ART of Adaptive Pattern Recognition by a Self-Organizing Neural Network’, Computer, 1988.

[37] Robert M. French, et al., ‘Pseudo-Recurrent Connectionist Networks: An Approach to the “Sensitivity-Stability” Dilemma’, Connection Science, 1997.

[38] A. Yonelinas, ‘The Nature of Recollection and Familiarity: A Review of 30 Years of Research’, 2002.