Capturing the diversity of biological tuning curves using generative adversarial networks

Tuning curves that characterize the response selectivities of biological neurons often exhibit a large degree of irregularity and diversity across neurons. Theoretical network models that feature heterogeneous cell populations or random connectivity also give rise to diverse tuning curves. However, a general framework for fitting such models to experimentally measured tuning curves has been lacking. We address this problem by proposing to view mechanistic network models as generative models whose parameters can be optimized to fit the distribution of experimentally measured tuning curves. A major obstacle to fitting such models is that their likelihood function is either not explicitly available or intractable to compute. Recent advances in machine learning provide ways to fit generative models without evaluating the likelihood or its gradient. Generative Adversarial Networks (GANs) provide one such framework and have been successful in traditional machine learning tasks. We apply this approach in two separate experiments, showing how GANs can be used to fit commonly used mechanistic models in theoretical neuroscience to datasets of measured tuning curves. This fitting procedure avoids the computationally expensive step of inferring latent variables, e.g., the biophysical parameters of individual cells or the particular realization of the full synaptic connectivity matrix, and directly learns the model parameters that characterize the statistics of connectivity or of single-cell properties. Another strength of this approach is that it fits the entire joint distribution of experimental tuning curves, rather than matching a few summary statistics chosen a priori by the user. More generally, this framework opens the door to fitting theoretically motivated dynamical network models directly to simultaneously or non-simultaneously recorded neural responses.
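
As a minimal sketch of this idea (not the paper's actual models), the following assumes a PyTorch-style setup: a toy random-connectivity generator whose only trainable parameters are the mean and spread of its feedforward weights, trained against a critic with WGAN-style weight clipping. All names, network sizes, the toy mechanistic model, and the stand-in data are illustrative assumptions.

    # Sketch: fitting a mechanistic "generator" of tuning curves with a GAN.
    # Each model neuron's tuning curve over n_stim stimuli is a rectified
    # linear readout of random feedforward weights; only the connectivity
    # statistics (mu, log_sigma) are learned, not per-cell weights.
    import torch
    import torch.nn as nn

    n_stim = 12      # number of stimulus conditions (e.g., orientations)
    n_inputs = 50    # feedforward inputs to each model neuron

    # Fixed stimulus responses of the input population (assumed, for illustration).
    input_rates = torch.randn(n_inputs, n_stim).abs()

    class MechanisticGenerator(nn.Module):
        """Generates tuning curves from random connectivity whose
        statistics are the trainable model parameters."""
        def __init__(self):
            super().__init__()
            self.mu = nn.Parameter(torch.zeros(1))
            self.log_sigma = nn.Parameter(torch.zeros(1))

        def forward(self, n_cells):
            # Reparameterized sampling of per-cell weight vectors keeps
            # gradients flowing to mu and log_sigma.
            z = torch.randn(n_cells, n_inputs)
            w = self.mu + self.log_sigma.exp() * z
            return torch.relu(w @ input_rates)  # (n_cells, n_stim) tuning curves

    critic = nn.Sequential(
        nn.Linear(n_stim, 64), nn.ReLU(),
        nn.Linear(64, 1),  # real-valued critic score (WGAN-style)
    )

    gen = MechanisticGenerator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(critic.parameters(), lr=1e-3)

    # Stand-in for a dataset of measured tuning curves.
    real_curves = torch.relu(0.5 + 0.3 * torch.randn(512, n_stim))

    for step in range(1000):
        idx = torch.randint(0, real_curves.shape[0], (64,))
        real, fake = real_curves[idx], gen(64)

        # Critic step: learn to separate measured from model-generated curves.
        d_loss = critic(fake.detach()).mean() - critic(real).mean()
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        for p in critic.parameters():          # crude Lipschitz constraint
            p.data.clamp_(-0.05, 0.05)

        # Generator step: make model curves indistinguishable from data.
        g_loss = -critic(gen(64)).mean()
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

In practice the toy generator would be replaced by the mechanistic network model of interest, and gradient-penalty variants of the Wasserstein objective tend to train more stably than weight clipping.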
