Storing Sparse Messages in Networks of Neural Cliques

An extension to a recently introduced binary neural network is proposed that allows the storage of sparse messages, in large numbers and with high memory efficiency. The new network is justified in both biological and informational terms. Its storage and retrieval rules are detailed and illustrated by simulation results.
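The clique-based storage and retrieval mechanism summarized above can be sketched in a few lines. This is a minimal toy model in the spirit of the Gripon–Berrou networks the abstract builds on, not the authors' implementation: the class name, parameters, and the dictionary encoding of a sparse message (one active neuron per cluster, in a subset of clusters) are all illustrative assumptions. Storing a message adds a clique among its active units; retrieval from a partial message uses a winner-take-all rule in each unknown cluster.

```python
# Toy clique-based associative memory (illustrative sketch, not the
# paper's implementation). A sparse message is a dict mapping a cluster
# index to the single active neuron index within that cluster.
import itertools


class CliqueNetwork:
    def __init__(self, n_clusters, cluster_size):
        self.c = n_clusters
        self.l = cluster_size
        # binary connections between (cluster, neuron) units,
        # stored as unordered pairs
        self.edges = set()

    def store(self, message):
        # storing a message = adding a clique: fully interconnect
        # all of its active (cluster, neuron) units
        units = list(message.items())
        for u, v in itertools.combinations(units, 2):
            self.edges.add(frozenset((u, v)))

    def retrieve(self, partial, candidate_clusters):
        # for each unknown cluster, activate the neuron receiving the
        # most connections from the known units (winner-take-all);
        # clusters with no support stay silent (sparse output)
        known = list(partial.items())
        result = dict(partial)
        for cl in candidate_clusters:
            if cl in result:
                continue
            scores = [
                sum(frozenset(((cl, ne), ku)) in self.edges for ku in known)
                for ne in range(self.l)
            ]
            best = max(scores)
            if best > 0:
                result[cl] = scores.index(best)
        return result
```

A stored message can then be recovered from a fraction of its sub-symbols, e.g. storing `{0: 1, 2: 3, 4: 0}` and retrieving with only clusters 0 and 2 known returns the full message, while unused clusters remain inactive.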
