The Brain as an Efficient and Robust Adaptive Learner

Understanding how the brain learns to compute functions reliably, efficiently, and robustly with noisy spiking activity is a fundamental challenge in neuroscience. Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network. However, learning in recurrent networks is greatly complicated by the credit assignment problem: the contribution of each connection to the global output error cannot be determined from quantities that are locally accessible to the synapse. Combining tools from adaptive control theory with efficient coding theories, we propose that neural circuits can indeed learn complex dynamic tasks with local synaptic plasticity rules, as long as they combine two experimentally established neural mechanisms. First, they should receive top-down feedback that drives both their activity and their synaptic plasticity. Second, inhibitory interneurons should maintain a tight balance between excitation and inhibition in the circuit. The resulting networks can learn arbitrary dynamical systems and produce spike trains as irregular and variable as those observed experimentally. Yet this variability at the single-neuron level may hide an extremely efficient and robust computation at the population level.
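
As a rough illustration of the efficient-coding and tight-balance idea (not the paper's learning algorithm itself), the sketch below simulates a small spike-coding network in which membrane voltages represent the error between a target dynamical variable and the population readout, and a neuron fires only when its spike would reduce that error. Everything here is an assumption made for illustration: the NumPy implementation, the network size, leak rate, and decoder scale, the greedy one-spike-per-step rule, and the simplification of computing voltages directly from the coding error, which stands in for the fast recurrent connectivity that would keep excitation and inhibition tightly balanced; learning of the weights themselves is omitted.

```python
import numpy as np

# Minimal sketch of a spike-coding ("efficient coding") network tracking a
# dynamical variable. All parameter values are illustrative assumptions.
rng = np.random.default_rng(1)

N, K = 100, 2                 # neurons, dimension of the encoded variable
dt, T = 1e-3, 2.0             # time step (s), simulated duration (s)
lam = 10.0                    # leak rate of target dynamics and readout (1/s)
D = 0.03 * rng.standard_normal((K, N))   # decoding weights (fixed, not learned here)
thresh = 0.5 * np.sum(D**2, axis=0)      # thresholds: fire only if the spike helps

steps = int(T / dt)
x = np.zeros(K)               # target dynamical variable
xhat = np.zeros(K)            # population readout (the network's estimate)
spike_counts = np.zeros(N)
err = np.zeros(steps)

for t in range(steps):
    c = np.array([np.sin(2 * np.pi * t * dt),       # slowly varying command input
                  np.cos(3 * np.pi * t * dt)])
    x += dt * (-lam * x + c)                         # target: x' = -lam * x + c
    xhat += dt * (-lam * xhat)                       # readout decays with the same leak

    # Voltages encode the projected prediction error. In the full model this is
    # maintained by fast recurrent connections (-D^T D) that keep excitation and
    # inhibition tightly balanced; here we compute the error directly.
    V = D.T @ (x - xhat)

    i = int(np.argmax(V - thresh))
    if V[i] > thresh[i]:                             # spiking reduces ||x - xhat||^2
        xhat += D[:, i]                              # each spike adds its decoding vector
        spike_counts[i] += 1

    err[t] = np.linalg.norm(x - xhat)

print(f"mean tracking error: {err.mean():.4f}")
print(f"total spikes: {int(spike_counts.sum())}, "
      f"spike-count CV across neurons: {spike_counts.std() / spike_counts.mean():.2f}")
```

Running a sketch like this typically shows a small, bounded population-level tracking error even though individual neurons fire sparsely and with very uneven spike counts, mirroring the point that single-neuron variability can coexist with precise computation at the population level.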
