Inferring Stimulus Selectivity from the Spatial Structure of Neural Network Dynamics

How are the spatial patterns of spontaneous and evoked population responses related? We study the impact of connectivity on the spatial pattern of fluctuations in the input-generated response by comparing the distributions of evoked and intrinsically generated activity across the units of a neural network. We develop an approach complementary to principal component analysis, in which separate high-variance directions are derived for each input condition. We analyze subspace angles to quantify both the difference between the shapes of trajectories corresponding to different network states and the orientation of the low-dimensional subspaces that driven trajectories occupy within the full space of neuronal activity. Beyond revealing how the spatiotemporal structure of spontaneous activity shapes input-evoked responses, these methods can be used to infer the input selectivity induced by network dynamics from experimentally accessible measures of spontaneous activity (e.g., from voltage- or calcium-sensitive optical imaging experiments). We conclude that the absence of a detailed spatial map of afferent inputs and cortical connectivity does not limit our ability to design spatially extended stimuli that evoke strong responses.
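The comparison described above can be sketched numerically: derive the top high-variance (principal) directions separately for each condition, then measure the principal angles between the resulting low-dimensional subspaces. The following is a minimal illustration, not the authors' implementation; the function name `principal_angles`, the rank `k`, and the synthetic trajectory data are all assumptions introduced for the example.

```python
import numpy as np

def principal_angles(X, Y, k=3):
    """Principal angles (radians) between the rank-k PCA subspaces of two
    trajectory matrices X and Y (rows = time points, columns = units).

    Hypothetical helper sketching the abstract's approach: high-variance
    directions are computed per condition, and subspace angles compare
    the orientations of the subspaces those trajectories occupy.
    """
    def top_pcs(Z, k):
        Zc = Z - Z.mean(axis=0)              # center each unit's activity
        # right singular vectors = principal directions in unit space
        _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
        return Vt[:k].T                      # units x k, orthonormal columns

    Qx, Qy = top_pcs(X, k), top_pcs(Y, k)
    # singular values of Qx^T Qy are the cosines of the principal angles
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Synthetic example: two noisy trajectories sharing a 3-D subspace
# embedded in a 50-unit network.
rng = np.random.default_rng(0)
T, N = 200, 50
base = rng.standard_normal((T, 3)) @ rng.standard_normal((3, N))
X = base + 0.01 * rng.standard_normal((T, N))
Y = base + 0.01 * rng.standard_normal((T, N))
angles = principal_angles(X, Y, k=3)         # near zero: shared subspace
```

Small principal angles indicate that two conditions drive activity into similarly oriented low-dimensional subspaces, while angles near π/2 indicate nearly orthogonal response patterns.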
