Geometric ergodicity of Gibbs samplers for the Horseshoe and its regularized variants

The Horseshoe is a widely used continuous shrinkage prior for high-dimensional Bayesian linear regression, and regularized versions of the Horseshoe prior have recently been introduced in the literature. Various Gibbs sampling Markov chains have been developed to generate approximate samples from the corresponding intractable posterior densities. Establishing geometric ergodicity of these Markov chains provides crucial technical justification for the accuracy of asymptotic standard errors of Markov chain based estimates of posterior quantities. In this paper, we establish geometric ergodicity for various Gibbs samplers corresponding to the Horseshoe prior and its regularized variants in the context of linear regression. First, we establish geometric ergodicity of a Gibbs sampler for the original Horseshoe posterior under strictly weaker conditions than existing analyses in the literature. Second, we consider the regularized Horseshoe prior introduced in [17], and prove geometric ergodicity of a Gibbs sampling Markov chain for the corresponding posterior without any truncation constraint on the global and local shrinkage parameters. Finally, we consider a variant of this regularized Horseshoe prior introduced in [14], and again establish geometric ergodicity of a Gibbs sampling Markov chain for the corresponding posterior.

MSC 2010 subject classifications: Primary 60J05, 60J20; secondary 33C10.
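For concreteness, the following is a minimal Python/NumPy sketch of a Gibbs sampler for the Horseshoe posterior in the linear regression model y = X beta + epsilon, epsilon ~ N(0, sigma^2 I), using the inverse-gamma auxiliary-variable parameterization popularized in [1]. The function name, the Jeffreys prior on sigma^2, and the exact form of the full conditionals are assumptions of this illustration and need not coincide with the parameterization analyzed in the paper.

import numpy as np

def horseshoe_gibbs(y, X, n_iter=1000, seed=0):
    # Hypothetical sketch: Gibbs sampler for linear regression with the
    # Horseshoe prior, via the inverse-gamma auxiliary variables of [1].
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    sigma2 = tau2 = xi = 1.0
    lam2 = np.ones(p)
    nu = np.ones(p)
    XtX, Xty = X.T @ X, X.T @ y
    draws = np.empty((n_iter, p))

    def inv_gamma(shape, scale):
        # draw from InvGamma(shape, scale) as the reciprocal of a Gamma draw
        return 1.0 / rng.gamma(shape, 1.0 / scale)

    for t in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma2 A^{-1}), A = X'X + diag(1/(tau2*lam2))
        A_inv = np.linalg.inv(XtX + np.diag(1.0 / (tau2 * lam2)))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)

        # sigma2 | rest, under a Jeffreys prior 1/sigma2 (an assumption here)
        resid = y - X @ beta
        sigma2 = inv_gamma(0.5 * (n + p),
                           0.5 * (resid @ resid + np.sum(beta**2 / (tau2 * lam2))))

        # local shrinkage parameters lambda_j^2 and their auxiliaries nu_j
        lam2 = inv_gamma(1.0, 1.0 / nu + beta**2 / (2.0 * tau2 * sigma2))
        nu = inv_gamma(1.0, 1.0 + 1.0 / lam2)

        # global shrinkage parameter tau^2 and its auxiliary xi
        tau2 = inv_gamma(0.5 * (p + 1),
                         1.0 / xi + np.sum(beta**2 / lam2) / (2.0 * sigma2))
        xi = inv_gamma(1.0, 1.0 + 1.0 / tau2)

        draws[t] = beta
    return draws

A call such as horseshoe_gibbs(y, X, n_iter=5000) returns the retained draws of beta; geometric ergodicity of the underlying chain is what justifies Markov chain central limit theorems, and hence asymptotic standard errors, for ergodic averages of such draws.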

[1] Enes Makalic et al. A Simple Sampler for the Horseshoe Estimator. IEEE Signal Processing Letters, 2015.

[2] James G. Scott et al. The horseshoe estimator for sparse signals, 2010.

[3] M. Suchard et al. Shrinkage with shrunken shoulders: inference via geometrically/uniformly ergodic Gibbs sampler, 2019.

[4] Jingyu He et al. Bayesian Factor Model Shrinkage for Linear IV Regression With Many Instruments, 2018.

[5] B. Mallick et al. Fast sampling with Gaussian scale-mixture priors in high-dimensional regression. Biometrika, 2015.

[6] James M. Flegal et al. Batch means and spectral variance estimators in Markov chain Monte Carlo, 2008. arXiv:0811.1729.

[7] Aki Vehtari et al. Sparsity information and regularization in the horseshoe and other shrinkage priors, 2017. arXiv:1707.01694.

[8] P. Diaconis et al. Gibbs sampling, exponential families and orthogonal polynomials, 2008. arXiv:0808.3852.

[9] Anirban Bhattacharya et al. Scalable Approximate MCMC Algorithms for the Horseshoe Prior. Journal of Machine Learning Research, 2020.

[10] J. Hobert et al. Trace Class Markov Chains for Bayesian Inference with Generalized Double Pareto Shrinkage Priors, 2017.

[11] J. Rosenthal. Minorization Conditions and Convergence Rates for Markov Chain Monte Carlo, 1995.

[12] James G. Scott et al. On the half-Cauchy prior for a global scale parameter, 2011. arXiv:1104.4937.

[13] N. Pillai et al. Dirichlet–Laplace Priors for Optimal Shrinkage. Journal of the American Statistical Association, 2014.

[14] A. Bhattacharya et al. Coupled Markov chain Monte Carlo for high-dimensional regression with Half-t priors, 2020.

[15] Kshitij Khare et al. Geometric ergodicity for Bayesian shrinkage models, 2014.

[16] Richard L. Tweedie et al. Markov Chains and Stochastic Stability. Communications and Control Engineering Series, 1993.

[17] T. Lu et al. Inverses of 2 × 2 block matrices, 2002.

[18] Gareth O. Roberts et al. Markov Chains and De-initializing Processes, 2001.

[19] Kshitij Khare et al. Geometric ergodicity of the Bayesian lasso, 2013.

[20] M. Betancourt et al. On the geometric ergodicity of Hamiltonian Monte Carlo. Bernoulli, 2016.