Information Geometry of the Retinal Representation Manifold

The ability to discriminate visual stimuli is constrained by their retinal representations. Previous studies of visual discriminability have been limited either to low-dimensional artificial stimuli or to theoretical considerations without a realistic encoding model. Here we propose a novel framework, based on information geometry, for understanding the stimulus discriminability achieved by retinal representations of naturalistic stimuli. To model the joint probability distribution of neural responses conditioned on the stimulus, we built a stochastic encoding model of a population of salamander retinal ganglion cells based on a three-layer convolutional neural network. This model accurately captured not only the mean response to natural scenes but also a variety of second-order statistics. With the model and the proposed theory, we computed the Fisher information metric over stimuli and studied the most discriminable stimulus directions. We found that the most discriminable stimulus direction varied substantially across stimuli, allowing us to examine its relationship to the current stimulus. The most discriminative response mode was often aligned with the most stochastic mode, implying that under natural scenes noise correlations in the retina are information-limiting rather than enhancing information transmission, as has previously been speculated. We also observed that sensitivity saturates less in the population than in single cells, and that Fisher information varies less with firing rate than sensitivity does. We conclude that, under natural scenes, population coding benefits from complementary coding and helps to equalize the information carried by different firing rates, which may facilitate decoding of the stimulus under principles of information maximization.
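To make the central computation concrete, the sketch below shows one way a Fisher information metric over stimuli can be obtained from a stochastic encoding model. It is a minimal illustration, not the model described above: it assumes conditionally independent Poisson responses with rates produced by a small stand-in PyTorch CNN (here called rate_model), under which the metric reduces to G(s) = J(s)^T diag(1/f(s)) J(s), where J(s) is the Jacobian of the firing rates with respect to the stimulus. The most discriminable stimulus direction at s is then the top eigenvector of G(s).

```python
# Minimal sketch (assumptions, not the authors' code): Fisher information
# metric of a stimulus under conditionally independent Poisson responses,
# r_i ~ Poisson(f_i(s)), with rates f given by a toy stand-in CNN.
# For this noise model, G(s) = J(s)^T diag(1/f(s)) J(s).
import torch

torch.manual_seed(0)

# Toy rate model: maps a 1x16x16 stimulus to firing rates of 8 model
# ganglion cells (softplus keeps rates strictly positive).
rate_model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 4, kernel_size=5, padding=2),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(4 * 16 * 16, 8),
    torch.nn.Softplus(),
)

def fisher_metric(stimulus):
    """Fisher information metric G(s) in stimulus (pixel) space."""
    flat = stimulus.reshape(-1)  # D-dimensional stimulus vector

    def rates(x):
        return rate_model(x.reshape(1, 1, 16, 16)).squeeze(0)  # shape (N,)

    jac = torch.autograd.functional.jacobian(rates, flat)  # shape (N, D)
    f = rates(flat)                                         # shape (N,)
    return jac.T @ torch.diag(1.0 / f) @ jac                # shape (D, D)

stimulus = torch.randn(16, 16)
G = fisher_metric(stimulus)

# Most discriminable stimulus direction = top eigenvector of G(s).
eigvals, eigvecs = torch.linalg.eigh(G)
best_direction = eigvecs[:, -1].reshape(16, 16)
print(eigvals[-1].item(), best_direction.shape)
```

For realistic stimulus dimensions one would avoid forming the full D-by-D metric and instead extract the leading eigenvector iteratively (for example, by power iteration on Jacobian-vector products); the dense construction above is only for clarity.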
