The Bayesian Lasso

The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (i.e., double-exponential) priors. Gibbs sampling from this posterior is possible using an expanded hierarchy with conjugate normal priors for the regression parameters and independent exponential priors on their variances. A connection with the inverse-Gaussian distribution provides tractable full conditional distributions. The Bayesian Lasso provides interval estimates (Bayesian credible intervals) that can guide variable selection. Moreover, the structure of the hierarchical model provides both Bayesian and likelihood methods for selecting the Lasso parameter. Slight modifications lead to Bayesian versions of other Lasso-related estimation methods, including bridge regression and a robust variant.
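To make the expanded hierarchy concrete, the following is a sketch of the standard scale-mixture formulation the abstract refers to; the symbols λ (Lasso parameter), τ_j² (latent prior variances), and D_τ are notation introduced here for illustration:

```latex
% Laplace priors on the beta_j arise as scale mixtures of normals
% with exponential mixing densities on the latent variances tau_j^2.
\begin{aligned}
  \mathbf{y} \mid \mu, \mathbf{X}, \boldsymbol\beta, \sigma^2
    &\sim \mathrm{N}_n\!\bigl(\mu\mathbf{1}_n + \mathbf{X}\boldsymbol\beta,\ \sigma^2\mathbf{I}_n\bigr), \\
  \boldsymbol\beta \mid \sigma^2, \tau_1^2,\dots,\tau_p^2
    &\sim \mathrm{N}_p\!\bigl(\mathbf{0},\ \sigma^2\mathbf{D}_\tau\bigr),
    \qquad \mathbf{D}_\tau = \mathrm{diag}(\tau_1^2,\dots,\tau_p^2), \\
  \tau_j^2 &\sim \mathrm{Exp}\!\bigl(\lambda^2/2\bigr)\ \text{independently},
    \qquad \sigma^2 \sim \pi(\sigma^2) \propto 1/\sigma^2 .
\end{aligned}
\qquad
\frac{1}{\tau_j^2} \,\Big|\, \beta_j, \sigma^2, \lambda
  \ \sim\ \text{Inverse-Gaussian}\!\left(\text{mean}=\sqrt{\tfrac{\lambda^2\sigma^2}{\beta_j^2}},\ \text{shape}=\lambda^2\right).
```

A minimal Gibbs-sampler sketch under these assumptions (centered y, standardized X, λ held fixed rather than estimated) might look like the following; the function name and arguments are hypothetical, and NumPy's Wald sampler is used for the inverse-Gaussian draws:

```python
import numpy as np

def bayesian_lasso_gibbs(X, y, lam=1.0, n_iter=5000, seed=0):
    """Sketch of a Gibbs sampler for the Bayesian Lasso hierarchy above.

    Assumes y is centered and X standardized; lam is treated as fixed
    (it could instead be given a hyperprior or chosen by marginal
    maximum likelihood, as the abstract notes).
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y

    tau2 = np.ones(p)          # latent prior variances
    beta = np.zeros(p)         # regression coefficients
    sigma2 = np.var(y)         # error variance
    draws = np.empty((n_iter, p))

    for it in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma^2 A^{-1}),  A = X'X + D_tau^{-1}
        A_inv = np.linalg.inv(XtX + np.diag(1.0 / tau2))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)

        # sigma^2 | rest ~ Inverse-Gamma((n-1+p)/2, RSS/2 + beta' D_tau^{-1} beta / 2)
        resid = y - X @ beta
        shape = 0.5 * (n - 1 + p)
        scale = 0.5 * (resid @ resid + beta @ (beta / tau2))
        sigma2 = scale / rng.gamma(shape)

        # 1/tau_j^2 | rest ~ Inverse-Gaussian(mean = sqrt(lam^2 sigma^2 / beta_j^2), shape = lam^2)
        mean_ig = np.sqrt(lam**2 * sigma2 / beta**2)
        tau2 = 1.0 / rng.wald(mean_ig, lam**2)

        draws[it] = beta

    return draws
```

Posterior medians of the stored draws give point estimates, and pointwise quantiles give the credible intervals mentioned above.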
