Information Rates and Optimal Decoding in Large Neural Populations

Many fundamental questions in theoretical neuroscience involve optimal decoding and the computation of Shannon information rates in populations of spiking neurons. In this paper, we apply methods from the asymptotic theory of statistical inference to obtain a clearer analytical understanding of these quantities. We find that for large neural populations carrying a finite total amount of information, the full spiking population response is asymptotically as informative as a single observation from a Gaussian process whose mean and covariance can be characterized explicitly in terms of network and single-neuron properties. The Gaussian form of this asymptotic sufficient statistic allows us, in certain cases, to perform optimal Bayesian decoding by simple linear transformations, and to obtain closed-form expressions for the Shannon information carried by the network. One technical advantage of the theory is that it may be applied easily even to non-Poisson point process network models; for example, we find that under some conditions, neural populations with strong history-dependent (non-Poisson) effects carry exactly the same information as do simpler equivalent populations of non-interacting Poisson neurons with matched firing rates. We argue that our findings help to clarify some results from the recent literature on neural decoding and neuroprosthetic design.
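As a point of orientation (a generic sketch, not the paper's own derivation), the kind of closed form alluded to above arises whenever the sufficient statistic is linear-Gaussian in the decoded quantity. Suppose the asymptotic sufficient statistic is modeled as y = A x + \epsilon with noise \epsilon \sim \mathcal{N}(0, \Sigma) and prior x \sim \mathcal{N}(\mu_0, C_0); here A, \Sigma, \mu_0, and C_0 are illustrative placeholders, not quantities defined in the paper. Under these assumptions the optimal Bayesian decoder is a linear transformation of y, and the Shannon information is available in closed form:

\[
\hat{x} = \mathbb{E}[x \mid y] = \mu_0 + C_0 A^{\top}\!\left(A C_0 A^{\top} + \Sigma\right)^{-1}\!\left(y - A \mu_0\right),
\qquad
I(x; y) = \tfrac{1}{2} \log \det\!\left(I + \Sigma^{-1} A C_0 A^{\top}\right).
\]

How the asymptotic mean and covariance of the Gaussian sufficient statistic (the analogues of A and \Sigma here) depend on network and single-neuron properties is what the paper characterizes explicitly.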
