Learning, Storing, and Disentangling Correlated Patterns in Neural Networks

The brain encodes relationships between objects using correlated neural representations. Previous studies have shown that processing correlated memory patterns is difficult for neural networks, and strategies based on modified unsupervised Hebbian rules have therefore been proposed. Here, we explore a supervised strategy for learning correlated patterns in a recurrent neural network. We require that the network not only learn to reconstruct a memory pattern, but also hold the pattern as an attractor long after the input cue is removed. Training the network with backpropagation through time, we show that it can store correlated patterns and, furthermore, that when continuously morphed patterns are presented, it acquires the structure of a continuous attractor neural network. By introducing spike frequency adaptation into the neural dynamics after training, we further demonstrate that the network is capable of anticipative tracking and of disentangling superposed patterns. We hope this study offers insight into how neural systems process correlated object representations.
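To make the supervised strategy concrete, here is a minimal PyTorch sketch of the training setup described above: a recurrent network receives a cue for a fixed number of steps, the input is then removed, and the loss penalizes reconstruction error only during the delay period, forcing each pattern to persist as an attractor. This is not the authors' code; the network sizes, cue and delay durations (T_CUE, T_DELAY), and the random stand-in patterns are all illustrative assumptions.

```python
import torch
import torch.nn as nn

N_IN, N_HID, T_CUE, T_DELAY = 64, 128, 10, 40   # assumed sizes/durations

class HoldNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNNCell(N_IN, N_HID, nonlinearity="tanh")
        self.readout = nn.Linear(N_HID, N_IN)

    def forward(self, pattern):
        batch = pattern.shape[0]
        h = torch.zeros(batch, N_HID)
        outs = []
        for t in range(T_CUE + T_DELAY):
            # Present the cue only during the first T_CUE steps,
            # then remove the input so the state must sustain itself.
            x = pattern if t < T_CUE else torch.zeros_like(pattern)
            h = self.rnn(x, h)
            outs.append(self.readout(h))
        return torch.stack(outs, dim=1)          # (batch, time, N_IN)

net = HoldNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
patterns = torch.randn(32, N_IN)                 # stand-in memory patterns

for step in range(1000):
    out = net(patterns)
    # Penalize error only after the cue is removed, so the pattern
    # must be held as an attractor rather than merely echoed.
    loss = ((out[:, T_CUE:] - patterns[:, None, :]) ** 2).mean()
    opt.zero_grad()
    loss.backward()                              # backpropagation through time
    opt.step()
```

Because gradients flow through the delay-period timesteps, BPTT shapes the recurrent weights so that each trained pattern becomes a fixed point of the autonomous dynamics.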

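The anticipative-tracking mechanism can likewise be sketched. Below is a minimal NumPy simulation of a one-dimensional continuous attractor network with a slow adaptation variable v following the standard spike-frequency-adaptation form tau_v dv/dt = -v + m u; all parameter values here are illustrative assumptions, not values from the paper.

```python
import numpy as np

N = 256
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
a, k, tau, tau_v, m, dt = 0.5, 0.05, 1.0, 50.0, 0.3, 0.1  # assumed parameters
dx = 2 * np.pi / N

# Translation-invariant Gaussian recurrent kernel on a ring.
d = np.minimum(np.abs(x[:, None] - x[None, :]),
               2 * np.pi - np.abs(x[:, None] - x[None, :]))
J = np.exp(-d**2 / (2 * a**2)) / (np.sqrt(2 * np.pi) * a)

u = np.exp(-x**2 / (4 * a**2))   # initial activity bump (synaptic input)
v = np.zeros(N)                  # slow adaptation current

for step in range(2000):
    r = np.maximum(u, 0) ** 2
    r = r / (1 + k * r.sum() * dx)        # divisive global normalization
    du = (-u - v + J @ r * dx) / tau
    dv = (-v + m * u) / tau_v             # spike frequency adaptation
    u += du * dt
    v += dv * dt

# With m large enough, adaptation destabilizes the static bump and it
# travels; coupled to a moving stimulus, this lets the bump lead the
# stimulus, i.e., anticipative tracking.
print("bump peak at x =", x[np.argmax(u)])
```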