Inception in visual cortex: in vivo-silico loops reveal most exciting images

Much of our knowledge about sensory processing in the brain is based on quasi-linear models and the stimuli that optimally drive them. However, sensory information processing is nonlinear, even in primary sensory areas, and optimizing sensory input is difficult due to the high-dimensional input space. We developed inception loops, a closed-loop experimental paradigm that combines in vivo recordings with in silico nonlinear response modeling to identify the Most Exciting Images (MEIs) for neurons in mouse V1. When presented back to the brain, MEIs indeed drove their target cells significantly better than the best stimuli identified by linear models. The MEIs exhibited complex spatial features that deviated from the textbook ideal of V1 as a bank of Gabor filters. Inception loops represent a widely applicable new approach to dissect the neural mechanisms of sensation.
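The core in silico step of the paradigm — synthesizing a Most Exciting Image by gradient ascent on a fitted response model under a fixed-contrast constraint — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fitted deep network is stood in for by a hypothetical toy "energy" model neuron, r(x) = (w·x)², whose gradient is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 16 * 16
w = rng.standard_normal(n_pix)          # stand-in for the fitted model's filter

def response(x):
    """Toy model neuron: an energy unit r(x) = (w . x)^2."""
    return float(np.dot(w, x) ** 2)

def grad(x):
    """Analytic gradient of the toy response with respect to the image."""
    return 2.0 * np.dot(w, x) * w

def synthesize_mei(steps=200, lr=0.1, norm=1.0):
    """Gradient ascent on the model response, projected back onto a
    fixed-norm sphere after every step (a simple contrast constraint)."""
    x = rng.standard_normal(n_pix)
    x *= norm / np.linalg.norm(x)
    for _ in range(steps):
        x = x + lr * grad(x)
        x *= norm / np.linalg.norm(x)
    return x

mei = synthesize_mei()
# For an energy unit, the MEI should align with +/- its filter.
alignment = abs(np.dot(mei, w)) / (np.linalg.norm(mei) * np.linalg.norm(w))
```

For this toy unit the optimization converges to (plus or minus) the normalized filter, so `alignment` approaches 1. In the actual inception loop, `response` and `grad` would come from a deep network trained on recorded V1 responses, and the synthesized MEI would then be presented back to the brain.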
