Assemblies of neurons can learn to classify well-separated distributions

An assembly is a large population of neurons whose synchronous firing is hypothesized to represent a memory, concept, word, or other cognitive category. Assemblies are believed to provide a bridge between high-level cognitive phenomena and low-level neural activity. Recently, a computational system called the Assembly Calculus (AC), with a repertoire of biologically plausible operations on assemblies, has been shown capable not only of simulating arbitrary space-bounded computation, but also of simulating complex cognitive phenomena such as language, reasoning, and planning. However, the mechanism whereby assemblies can mediate learning has not been known. Here we present such a mechanism, and prove rigorously that, for simple classification problems defined on distributions of labeled assemblies, a new assembly representing each class can be reliably formed in response to a few stimuli from the class; this assembly is thereafter reliably recalled in response to new stimuli from the same class. Furthermore, such class assemblies will be distinguishable as long as the respective classes are reasonably separated, for example when they are clusters of similar assemblies, or more generally separable with a margin by a linear threshold function. To prove these results, we draw on random graph theory with dynamic edge weights to estimate sequences of activated vertices, yielding strong generalizations of previous calculations and theorems in this field over the past five years. These theorems are backed up by experiments demonstrating the successful formation of assemblies that represent concept classes, both on synthetic data drawn from such distributions and on MNIST, which lends itself to classification through one assembly per digit. Seen as a learning algorithm, this mechanism is entirely online, generalizes from very few samples, and requires only mild supervision, all key attributes of learning in a model of the brain. We argue that this learning mechanism, supported by separate sensory pre-processing mechanisms for extracting attributes such as edges or phonemes from real-world data, can be the basis of biological learning in cortex.
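
To make the mechanism concrete, the following is a minimal sketch of assembly-based class learning in a single learning area, assuming the standard Assembly Calculus ingredients: a random synaptic graph with connection probability p, k-winners-take-all firing, and Hebbian plasticity with parameter beta. The parameter values and the helper names (`project`, `train_class`, `classify`, `sample_stimulus`) are illustrative assumptions rather than the paper's exact protocol; classification here simply recalls each learned class assembly and labels a new stimulus by largest overlap.

```python
import numpy as np

# Sketch of assembly-based class learning: n neurons per area, random
# connectivity with probability p, k-winners-take-all firing, and
# Hebbian plasticity with parameter beta. Values are illustrative.
rng = np.random.default_rng(0)
n, k, p, beta = 1000, 50, 0.05, 0.1

# Random synapses from the stimulus area into the learning area,
# plus recurrent synapses within the learning area.
W_in = (rng.random((n, n)) < p).astype(float)
W_rec = (rng.random((n, n)) < p).astype(float)

def k_cap(inp):
    """Indices of the k neurons with the largest total input (k-WTA)."""
    return np.argsort(inp)[-k:]

def project(stimulus, prev_active, rounds=5):
    """Fire the learning area for a few rounds, applying Hebbian updates."""
    active = prev_active
    for _ in range(rounds):
        inp = W_in[:, stimulus].sum(axis=1)
        if active is not None:
            inp += W_rec[:, active].sum(axis=1)
        new_active = k_cap(inp)
        # Hebbian update: multiplicatively strengthen synapses into winners.
        W_in[np.ix_(new_active, stimulus)] *= 1 + beta
        if active is not None:
            W_rec[np.ix_(new_active, active)] *= 1 + beta
        active = new_active
    return active

def train_class(samples):
    """Present a few labeled stimuli; the final winners form the class assembly."""
    active = None
    for s in samples:
        active = project(s, active)
    return set(active)

def classify(stimulus, class_assemblies):
    """Recall: fire once without plasticity, label by largest overlap."""
    winners = set(k_cap(W_in[:, stimulus].sum(axis=1)))
    return max(class_assemblies, key=lambda c: len(winners & class_assemblies[c]))

# Usage: each stimulus is a set of k active neurons in the sensory area;
# samples of a class are noisy copies of that class's center stimulus.
def sample_stimulus(center, noise=5):
    s = np.array(center)
    s[rng.choice(k, noise, replace=False)] = rng.integers(0, n, noise)
    return s

centers = {c: rng.choice(n, k, replace=False) for c in ("A", "B")}
assemblies = {c: train_class([sample_stimulus(centers[c]) for _ in range(5)])
              for c in centers}
print(classify(sample_stimulus(centers["A"]), assemblies))  # typically prints "A"
```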
