Bayesian Inference for Sparse Generalized Linear Models

We present a framework for efficient, accurate approximate Bayesian inference in generalized linear models (GLMs), based on the expectation propagation (EP) technique. The parameters can be endowed with a factorizing prior distribution, encoding properties such as sparsity or non-negativity. The central role of posterior log-concavity in Bayesian GLMs is emphasized and related to stability issues in EP. In particular, we use our technique to infer the parameters of a point process model for neuronal spiking data from multiple electrodes, demonstrating significantly superior predictive performance when a sparsity assumption is enforced via a Laplace prior distribution.
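For concreteness, a minimal sketch of the model class described above, in illustrative notation not taken from the paper: with covariates x_i, responses y_i, weights w, and an assumed Laplace scale parameter tau, a sparse GLM posterior factorizes as

    % illustrative sparse-GLM posterior; tau is an assumed Laplace scale, not from the paper
    p(w \mid y, X) \;\propto\; \prod_{i=1}^{n} P\!\left(y_i \mid x_i^{\top} w\right)
      \;\prod_{j=1}^{d} \frac{\tau}{2}\, e^{-\tau |w_j|}.

Since the GLM likelihood terms (under a canonical link) and the Laplace prior factors are each log-concave in w, the posterior is log-concave as well; this is the property the abstract ties to the stability of EP updates.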
