SUMMARY

The rate of convergence of the Gibbs sampler is discussed. The Gibbs sampler is a Monte Carlo simulation method with extensive application to computational problems in the Bayesian paradigm. Conditions for the geometric rate of convergence of the algorithm are derived for discrete and continuous parameter spaces, and an illustrative exponential family example is given.

This paper investigates conditions under which the Gibbs sampler (Gelfand and Smith, 1990; Tanner and Wong, 1987; Geman and Geman, 1984) converges at a geometric rate. The main results appear in Sections 2 and 3, where geometric convergence is established with respect to the total variation and supremum norms under fairly natural conditions on the underlying distribution. For ease of exposition, we concentrate on the two most commonly encountered situations, where the state space is finite or continuous. All our results establish uniform convergence, a strong form of geometric convergence, under appropriate regularity conditions. Uniform convergence is a useful property in its own right, and it is also a sufficient condition for certain ergodic central limit theorems. Such results are important for estimation in Markov chain simulation but will not be considered in detail here.

Our approach is to apply the theory of Markov chains to the specific case of the Gibbs sampler. In the finite state space case, uniform ergodicity is automatic. The situation is more complicated for continuous state spaces, where even well-behaved underlying distributions can give rise to Markov chains which converge slowly or have unbounded kernels. We give two results in this context, Corollaries 2 and 3, which establish uniform convergence under different sets of conditions on the underlying density. Finally, we apply these results to an example of a Bayesian hierarchical model in which the regularity conditions for geometric convergence are naturally satisfied. Here the hierarchical structure of the model is crucial in permitting the application of Corollary 3.
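The paper itself contains no code, but the algorithm under study is easy to illustrate. The following minimal sketch (not from the paper; the target distribution and all names are our own choices for illustration) runs a two-component Gibbs sampler on a bivariate normal with zero means, unit variances and correlation rho, a standard toy example where both full conditionals are available in closed form and the chain is known to converge geometrically.

```python
import random


def gibbs_bivariate_normal(rho, n_iter=5000, seed=0):
    """Two-component Gibbs sampler for a bivariate normal target with
    zero means, unit variances and correlation rho.

    Each full conditional is normal: x | y ~ N(rho * y, 1 - rho**2),
    and symmetrically for y | x, so one sweep of the sampler is simply
    two alternating univariate Gaussian draws.
    """
    rng = random.Random(seed)
    sd = (1.0 - rho ** 2) ** 0.5  # conditional standard deviation
    x, y = 0.0, 0.0               # arbitrary starting point
    samples = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, sd)  # draw from x | y
        y = rng.gauss(rho * x, sd)  # draw from y | x
        samples.append((x, y))
    return samples
```

After discarding an initial burn-in, the empirical correlation of the draws approaches rho, consistent with the chain converging to the target distribution.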
[1] N. Metropolis et al., Equation of State Calculations by Fast Computing Machines, Journal of Chemical Physics, 1953.
[2] J. Schwartz et al., Linear Operators. Part I: General Theory, 1960.
[3] W. K. Hastings, Monte Carlo Sampling Methods Using Markov Chains and Their Applications, 1970.
[4] P. Peskun, Optimum Monte-Carlo Sampling Using Markov Chains, 1973.
[5] J. G. Kemeny et al., Finite Markov Chains, 1960.
[6] D. Geman et al., Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1984.
[7] E. Nummelin, General Irreducible Markov Chains and Non-negative Operators, 1984.
[8] W. Wong et al., The Calculation of Posterior Distributions by Data Augmentation, 1987.
[9] A. F. M. Smith et al., Sampling-Based Approaches to Calculating Marginal Densities, 1990.
[10] L. Tierney, Exploring Posterior Distributions Using Markov Chains, 1992.