Learning a Continuous Attractor Neural Network from Real Images

Continuous attractor neural networks (CANNs) have been widely used as a canonical model for neural information representation. It remains unclear, however, how the neural system acquires such a network structure in practice. In the present study, we propose a biologically plausible scheme by which the neural system can learn a CANN from real images. The scheme addresses two key issues. The first is to generate high-level representations of objects, such that the correlation between neural representations reflects the semantic relationship between the objects. We adopt a deep neural network trained on a large number of natural images to achieve this goal. The second is to learn correlated memory patterns in a recurrent neural network. We adopt a modified Hebb rule, which encodes the correlation between neural representations into the connectivity of the network. We carry out a number of experiments demonstrating that when the presented images are linked by a continuous feature, the neural system learns a CANN successfully, in the sense that the images are stored as a continuous family of stationary states of the network, forming a low-energy sub-manifold in the network's state space.
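
To make the second step concrete, the following is a minimal sketch, not the authors' exact method: synthetic correlated ±1 patterns stand in for the high-level representations (which in the paper come from a pretrained deep network), and the classic Hopfield outer-product rule stands in for the modified Hebb rule. The sketch stores a family of correlated patterns in a recurrent network and then checks how close each pattern is to a stationary state of the dynamics.

```python
# Minimal sketch under the assumptions stated above: synthetic +/-1 patterns
# replace deep-network features, and the plain Hopfield outer-product (Hebbian)
# rule replaces the paper's modified Hebb rule.
import numpy as np

rng = np.random.default_rng(0)

N = 200   # number of neurons
P = 10    # number of patterns, e.g. views of an object along a continuous feature

# Synthetic "high-level representations": each pattern is a progressively
# perturbed copy of a base pattern, so nearby patterns are strongly correlated,
# mimicking images linked by a continuous feature.
base = np.sign(rng.standard_normal(N))
patterns = np.empty((P, N))
for p in range(P):
    flip = rng.random(N) < 0.05 * p          # flip more bits as p grows
    patterns[p] = np.where(flip, -base, base)

# Hebbian (outer-product) learning of the recurrent connections.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def relax(state, steps=50):
    """Iterate simple sign-threshold recurrent dynamics toward a fixed point."""
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

# With strongly correlated patterns, the plain Hebb rule tends to merge nearby
# memories into a single attractor; the overlaps printed below make this
# visible, and it is exactly the problem a modified Hebb rule is meant to address.
for p in range(P):
    fixed = relax(patterns[p].copy())
    overlap = fixed @ patterns[p] / N
    print(f"pattern {p}: overlap of fixed point with stored pattern = {overlap:.2f}")
```

The decreasing overlaps illustrate why a rule that explicitly accounts for the correlation between stored representations is needed if nearby images are to remain distinct yet connected stationary states, which is the property the learned CANN is shown to have.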
