Suppressing Random Walks in Markov Chain Monte Carlo Using Ordered Overrelaxation

Markov chain Monte Carlo methods such as Gibbs sampling and simple forms of the Metropolis algorithm typically move about the distribution being sampled via a random walk. For the complex, high-dimensional distributions commonly encountered in Bayesian inference and statistical physics, the distance moved in each iteration of these algorithms will usually be small, because it is difficult or impossible to transform the problem to eliminate dependencies between variables. The inefficiency inherent in taking such small steps is greatly exacerbated when the algorithm operates via a random walk, as in such a case moving to a point n steps away will typically take around n² iterations. Such random walks can sometimes be suppressed using “overrelaxed” variants of Gibbs sampling (a.k.a. the heatbath algorithm), but such methods have hitherto been largely restricted to problems where all the full conditional distributions are Gaussian. I present an overrelaxed Markov chain Monte Carlo algorithm based on order statistics that is more widely applicable. In particular, the algorithm can be applied whenever the full conditional distributions are such that their cumulative distribution functions and inverse cumulative distribution functions can be efficiently computed. The method is demonstrated on an inference problem for a simple hierarchical Bayesian model.
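One way to realize an overrelaxed update based on order statistics is to draw K auxiliary points from the variable's full conditional, sort them together with the current value, and move to the point at the mirrored rank. When the conditional's CDF and inverse CDF are available, as the abstract assumes, the same update can be carried out on the uniform scale. The Python sketch below illustrates this naive O(K) version of the idea only; it is not the paper's more efficient CDF-based construction, and the function name, the choice K = 19, and the exponential conditional in the usage example are illustrative assumptions.

```python
import numpy as np

def ordered_overrelax_update(x_old, cdf, inv_cdf, K=19, rng=None):
    """One ordered-overrelaxation update of a single variable.

    cdf, inv_cdf -- CDF and inverse CDF of the variable's full
                    conditional given the current values of all
                    other variables (assumed continuous).
    K            -- number of auxiliary draws; K = 1 reduces to an
                    ordinary Gibbs update.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Work on the uniform scale: F(x) is Uniform(0,1) when x ~ F.
    u_old = cdf(x_old)

    # K auxiliary points from the conditional, represented on the
    # uniform scale and sorted together with the current value.
    combined = np.sort(np.append(rng.uniform(size=K), u_old))

    # Rank of the current value among the K+1 points (ties have
    # probability zero for a continuous conditional), then reflect
    # that rank about the middle of the ordering.
    r = int(np.searchsorted(combined, u_old))
    u_next = combined[K - r]

    # Map the mirrored point back to the original scale.
    return inv_cdf(u_next)


# Hypothetical usage: a variable whose full conditional is
# Exponential(rate), which has a closed-form CDF and inverse CDF.
rate = 2.0
x_new = ordered_overrelax_update(
    x_old=0.7,
    cdf=lambda x: 1.0 - np.exp(-rate * x),
    inv_cdf=lambda u: -np.log1p(-u) / rate,
    K=19,
)
```

With K = 1 the mirrored rank always lands on the single auxiliary draw, so the update coincides with an ordinary Gibbs update; larger K moves the new value further toward the opposite side of the conditional from the old one, which is what suppresses the random-walk behaviour described above.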
