Stable Memory Allocation in the Hippocampus: Fundamental Limits and Neural Realization

The hippocampus is believed to function as a memory allocator in the brain, but the mechanism by which it does so remains unknown. In Valiant's neuroidal model, the hippocampus is described as a randomly connected graph whose computation maps an input to a set of activated neuroids of stable size. Valiant proposed three requirements for the hippocampal circuit to serve as a stable memory allocator (SMA): stability, continuity, and orthogonality. According to Valiant's model, SMA functionality in the hippocampus is essential for further computation in the cortex. In this paper, we give these requirements on memorization functions a rigorous mathematical formulation and introduce a notion of capacity based on the probability of erroneous allocation. We prove fundamental limits on the capacity and error probability of SMAs in both data-independent and data-dependent settings. We also construct an example of a stable memory allocator that can be implemented by neuroidal circuits. Both theoretical bounds and simulation results show that this neural SMA performs well.
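
To make the random-graph picture concrete, the following Python sketch simulates a random bipartite threshold circuit and empirically probes the three requirements. It is an illustration under assumed parameters (the connection probability, input sparsity, and firing threshold are all hypothetical choices), not the construction analyzed in the paper.

```python
import numpy as np

# A minimal sketch of the random-graph picture above: a random bipartite
# threshold circuit mapping a set of active input neuroids to a set of
# activated output neuroids. All parameter values are illustrative, not
# the ones analyzed in the paper.
rng = np.random.default_rng(0)

n_in, n_out = 1000, 1000  # numbers of input and output neuroids (hypothetical)
p_conn = 0.05             # probability of a random connection
k_active = 100            # number of active input neuroids per memory
theta = 9                 # firing threshold, tuned so the output size is stable

# Random bipartite connectivity matrix.
G = rng.random((n_out, n_in)) < p_conn

def allocate(active_inputs):
    """Return the set of output neuroids receiving at least `theta`
    active inputs."""
    counts = G[:, sorted(active_inputs)].sum(axis=1)
    return set(np.flatnonzero(counts >= theta))

# Stability: the size of the allocated set concentrates around a fixed value.
inp = set(rng.choice(n_in, k_active, replace=False))
out = allocate(inp)
print("allocated set size:", len(out))

# Continuity: changing a few active inputs barely changes the allocated set.
outside = np.setdiff1d(np.arange(n_in), list(inp))
inp_near = set(list(inp)[:-5]) | set(rng.choice(outside, 5, replace=False))
print("overlap with perturbed input:",
      len(out & allocate(inp_near)) / max(len(out), 1))

# Orthogonality: an unrelated input maps to a nearly disjoint set.
inp_far = set(rng.choice(n_in, k_active, replace=False))
print("overlap with unrelated input:",
      len(out & allocate(inp_far)) / max(len(out), 1))
```

In this sketch the threshold is the knob that fixes the expected size of the allocated set; raising it shrinks and sharpens the output, lowering it does the opposite. The trade-off between that size, the overlap behavior, and the number of memories that can be stored is exactly what the capacity and error-probability bounds in the paper quantify.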
