Assemblies of neurons can learn to classify well-separated distributions

Assemblies are patterns of coordinated firing across large populations of neurons, believed to represent higher-level information in the brain such as memories, concepts, words, and other cognitive categories. Recently, a computational system called the Assembly Calculus (AC) has been proposed, based on a set of biologically plausible operations on assemblies. This system is capable of simulating arbitrary space-bounded computation and naturally describes complex cognitive phenomena such as language. However, whether assemblies can perform the brain's greatest trick, its ability to learn, has remained an open question. We show that the AC provides a mechanism for learning to classify samples from well-separated classes. We prove rigorously that for simple classification problems, a new assembly representing each class can be reliably formed in response to a few stimuli from that class; this assembly is henceforth reliably recalled in response to new stimuli from the same class. Furthermore, such class assemblies remain distinguishable as long as the respective classes are reasonably separated, in particular when they are clusters of similar assemblies or, more generally, are divided by a halfspace with margin. Experimentally, we demonstrate the successful formation of assemblies representing concept classes on synthetic data drawn from these distributions, as well as on MNIST, which lends itself to classification with one assembly per digit. Seen as a learning algorithm, this mechanism is entirely online, generalizes from very few samples, and requires only mild supervision, all key attributes of learning in a model of the brain.
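
The core mechanism is projection with a k-cap and Hebbian plasticity: a stimulus fires into a downstream area through random synapses, the k neurons receiving the most input fire (winner-take-all), and synapses from active inputs onto the winners are strengthened, so a stable assembly emerges after a few rounds. The sketch below illustrates this on two synthetic well-separated classes. It is a minimal, feedforward-only simplification (full AC areas also have recurrent synapses), and all names and parameter values (Area, k, p, beta) are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch of assembly formation and recall via projection, k-cap,
# and Hebbian plasticity. Feedforward only (no recurrent synapses within
# the area), so it simplifies the full Assembly Calculus; names and
# parameters here are illustrative, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

class Area:
    def __init__(self, n_in, n, k, p=0.05, beta=0.1):
        self.k = k                                # cap size: winners per round
        self.beta = beta                          # Hebbian plasticity rate
        # random Bernoulli(p) synapses from the input area to this area
        self.W = (rng.random((n, n_in)) < p).astype(float)

    def project(self, stimulus, rounds=5, plasticity=True):
        """Fire a binary stimulus for several rounds; return final winners."""
        winners = np.array([], dtype=int)
        active = np.flatnonzero(stimulus)
        for _ in range(rounds):
            inputs = self.W @ stimulus             # synaptic input per neuron
            winners = np.argsort(inputs)[-self.k:]  # k-cap (winner-take-all)
            if plasticity:
                # strengthen synapses from active inputs onto the winners
                self.W[np.ix_(winners, active)] *= 1.0 + self.beta
        return set(winners.tolist())

def overlap(a, b):
    return len(a & b) / max(len(a), 1)

# Two well-separated classes: noisy firings of two random core stimuli.
n_in, n, k = 1000, 1000, 50
cores = [rng.permutation(n_in)[:k] for _ in range(2)]

def sample(c):
    s = np.zeros(n_in)
    s[cores[c][rng.random(k) < 0.8]] = 1.0   # each core neuron fires w.p. 0.8
    return s

area = Area(n_in, n, k)

# Mild supervision: a few labeled samples per class form one assembly each.
assemblies = []
for c in range(2):
    for _ in range(3):
        a = area.project(sample(c))
    assemblies.append(a)

# Classify fresh samples by overlap of the recalled winners with each
# stored class assembly; plasticity is off at test time (pure read-out).
correct = 0
for _ in range(100):
    c = int(rng.integers(2))
    recalled = area.project(sample(c), plasticity=False)
    scores = [overlap(recalled, a) for a in assemblies]
    correct += (int(np.argmax(scores)) == c)
print(f"test accuracy: {correct}/100")
```

Classification here is by the overlap of the recalled winner set with each stored class assembly, and learning happens entirely online as samples arrive, matching the few-shot, mildly supervised character described above.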
