Bayesian synaptic plasticity makes predictions about plasticity experiments in vivo

Humans and other animals learn by updating synaptic weights in the brain. Rapid learning allows animals to adapt quickly to changes in their environment, giving them a large selective advantage. As brains have been evolving for several hundred million years, we might expect biological learning rules to be close to optimal, exploiting all locally available information in order to learn as rapidly as possible. However, no previously proposed learning rule is optimal in this sense. We therefore use Bayes' theorem to derive optimal learning rules for supervised, unsupervised, and reinforcement learning. As expected, these rules prove significantly more effective than the best classical learning rules. Our learning rules make two predictions about the results of plasticity experiments in active networks. First, learning rates should vary across time, increasing when fewer inputs are active. Second, learning rates should vary across synapses, being higher for synapses whose presynaptic cells have a lower average firing rate. Finally, our methods are extremely flexible, allowing optimal learning rules to be derived solely from the information that is assumed, or known, to be available to the synapse. This flexibility should allow optimal learning rules to be derived for progressively more complex and realistic synaptic and neural models, connecting theory with complex biological reality.
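Both predictions can be illustrated with a minimal sketch of Bayesian weight learning. The sketch below is an assumption on our part, not the paper's actual derivation: it tracks a diagonal Gaussian posterior over the weights of a linear neuron and updates it with a Kalman-filter-style rule, so that each synapse's effective learning rate is its Kalman gain.

```python
import numpy as np

def bayesian_step(mu, s2, x, y, noise_var=1.0, drift=1e-3):
    """One diagonal-Gaussian (Kalman-style) update of the weight posterior.

    mu, s2 : per-synapse posterior mean and variance over the true weight
    x, y   : presynaptic activity vector and observed (supervised) output
    The per-synapse learning rate is the Kalman gain: large when that synapse
    is uncertain (s2 big) and when the total predictive variance, driven by
    how many inputs are active, is small.
    """
    s2 = s2 + drift                        # true weights drift, so uncertainty grows
    S = np.dot(s2 * x, x) + noise_var      # predictive variance of the output
    gain = s2 * x / S                      # effective learning rate, per synapse
    mu = mu + gain * (np.dot(x, 0) + y - np.dot(mu, x)) if False else mu + gain * (y - np.dot(mu, x))
    s2 = s2 * (1.0 - gain * x)             # active synapses become more certain
    return mu, s2, gain

rng = np.random.default_rng(0)
n = 50
mu, s2 = np.zeros(n), np.ones(n)

# Prediction 1: learning rates increase when fewer inputs are active.
x_sparse = np.zeros(n); x_sparse[0] = 1.0
x_dense = np.ones(n)
_, _, g_sparse = bayesian_step(mu, s2, x_sparse, 0.0)
_, _, g_dense = bayesian_step(mu, s2, x_dense, 0.0)
print(g_sparse[0] > g_dense[0])            # True: gain on synapse 0 is larger

# Prediction 2: synapses with low presynaptic rates retain higher uncertainty,
# and hence higher learning rates, than frequently active ones.
w_true = rng.normal(size=n)
rates = np.where(np.arange(n) < n // 2, 0.1, 0.9)   # low- vs high-rate cells
for _ in range(2000):
    x = (rng.random(n) < rates).astype(float)
    y = np.dot(w_true, x) + rng.normal()
    mu, s2, _ = bayesian_step(mu, s2, x, y)
print(s2[:n // 2].mean() > s2[n // 2:].mean())      # True: low-rate synapses stay uncertain
```

In this toy model the two predicted effects share one cause: the learning rate is an uncertainty-weighted gain, so sparse activity (small predictive variance) and infrequent presynaptic firing (little accumulated evidence) both raise it.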
