Over-complete representations on recurrent neural networks can support persistent percepts

A striking aspect of cortical neural networks is the divergence of a relatively small number of input channels from the peripheral sensory apparatus into a large number of cortical neurons, an over-complete representation strategy. Cortical neurons are then connected by a sparse network of lateral synapses. Here we propose that such an architecture may increase the persistence of the representation of an incoming stimulus, or percept. We demonstrate that for a family of networks in which the receptive field of each neuron is re-expressed by its outgoing connections, a represented percept can remain constant despite changing neural activity. We term networks with this choice of connectivity REceptive FIeld REcombination (REFIRE) networks. The sparse REFIRE network may serve as a high-dimensional integrator and a biologically plausible model of the local cortical circuit.
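The following is a minimal numerical sketch of this idea, not the paper's actual construction: we assume a random over-complete dictionary D whose columns stand in for receptive fields, and build illustrative lateral weights W by re-expressing each receptive field as a combination of a small random subset of the others (so D W = D). Under linear rate dynamics da/dt = -a + W a, the read-out percept x = D a then stays constant even though the activity a itself keeps changing. All dimensions, the subset size k, and the least-squares recombination step are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: n input channels, m >> n neurons (over-complete).
n, m = 10, 40
k = 15  # number of other neurons each receptive field is recombined from

# Random over-complete dictionary: column j plays the role of neuron j's receptive field.
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)

# Recombination step (sketch): re-express each receptive field as a combination
# of k *other* receptive fields, so that D @ W = D, i.e. every column of (W - I)
# lies in the null space of D.
W = np.zeros((m, m))
for j in range(m):
    others = rng.choice([i for i in range(m) if i != j], size=k, replace=False)
    w, *_ = np.linalg.lstsq(D[:, others], D[:, j], rcond=None)
    W[others, j] = w

# Linear rate dynamics da/dt = -a + W a.  Because D (W - I) = 0, the read-out
# percept x = D a is conserved while the activity pattern a keeps changing.
a0 = rng.standard_normal(m)
a, x0 = a0.copy(), D @ a0

# Kept short: without the additional structure discussed in the paper, activity
# along the null space of D can grow over time, even though the percept does not.
dt, steps = 0.05, 40
for _ in range(steps):
    a = a + dt * (W @ a - a)

print("percept drift   ||D a_T - D a_0|| =", np.linalg.norm(D @ a - x0))
print("activity change ||a_T - a_0||     =", np.linalg.norm(a - a0))
```

Running this prints a percept drift near numerical precision alongside a substantial change in the activity vector, which is the qualitative behavior the abstract describes: the percept persists while the underlying activity does not.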
