Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

Generalized linear models (GLMs) are among the most commonly used tools for describing the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we approximate the full posterior distribution over all weights. In addition, we use a Laplace prior to favor sparse solutions, so that stimulus features which do not critically influence neural activity are assigned zero weights and are thus effectively excluded from the model. This feature selection mechanism improves both the interpretability of the model and its predictive performance. The posterior distribution yields confidence intervals, which make it possible to assess the statistical significance of the solution. In neural data analysis, the amount of available experimental data is often limited while the parameter space is large; in such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential. We apply our method to multi-electrode recordings of retinal ganglion cells and use the uncertainty estimates to test the statistical significance of functional couplings between neurons. Furthermore, we use the sparsity of the Laplace prior to select, from a spike-triggered covariance analysis, those filters that are most informative about the neural response.
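
The model and prior can be made concrete with a short sketch. The paper approximates the full posterior over the GLM weights with expectation propagation; as a simpler stand-in, the Python snippet below (not the authors' code; the synthetic data, variable names, and the prior scale lam are illustrative assumptions) computes only the MAP estimate of a Poisson GLM with a Laplace (L1) prior, which already shows how the sparsity prior suppresses irrelevant stimulus features.

    import numpy as np
    from scipy.optimize import minimize

    # Minimal sketch (assumed, not the authors' code): a Poisson GLM for
    # binned spike counts with a Laplace (L1) prior on the stimulus weights.
    # The paper approximates the full posterior with expectation propagation;
    # here we compute only the MAP estimate.

    rng = np.random.default_rng(0)
    n_bins, n_features = 1000, 15
    X = rng.standard_normal((n_bins, n_features))  # stimulus features per time bin
    w_true = np.zeros(n_features)
    w_true[:3] = [1.0, -0.8, 0.5]                  # only 3 features drive the cell
    rate = np.exp(X @ w_true - 1.0)                # exponential nonlinearity
    y = rng.poisson(rate)                          # simulated spike counts

    lam = 10.0                                     # Laplace prior scale (assumed)

    def neg_log_posterior(w):
        # Poisson negative log-likelihood plus the L1 term from the Laplace prior
        eta = X @ w - 1.0                          # fixed offset for simplicity
        return np.sum(np.exp(eta)) - y @ eta + lam * np.sum(np.abs(w))

    # Powell is derivative-free, so it copes with the non-smooth L1 term.
    res = minimize(neg_log_posterior, np.zeros(n_features), method="Powell")
    w_map = res.x
    print("recovered nonzero weights:", np.flatnonzero(np.abs(w_map) > 1e-2))

A proximal or coordinate-descent solver would return exactly zero weights for irrelevant features; the derivative-free Powell method only drives them close to zero, which is why the sketch thresholds at 1e-2. The EP treatment in the paper goes further by attaching uncertainty estimates to each weight.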
