A key limitation of sampling algorithms for approximate inference is that their approximation error is difficult to quantify. Widely used sampling schemes, such as sequential importance sampling with resampling and Metropolis-Hastings, produce output samples drawn from a distribution that may be far from the target posterior. This paper shows how to upper-bound the symmetric KL divergence between the output distribution of a broad class of sequential Monte Carlo (SMC) samplers and their target posterior distributions, subject to assumptions about the accuracy of a separate gold-standard sampler. The proposed method applies to samplers that combine multiple particles, multinomial resampling, and rejuvenation kernels. Experiments demonstrate the technique by estimating bounds on the divergence of SMC samplers for posterior inference in a Bayesian linear regression model and a Dirichlet process mixture model.
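To make the quantity being bounded concrete, the following is a minimal toy sketch (not the paper's method) of the symmetric KL divergence D(p‖q) + D(q‖p), estimated by Monte Carlo for two one-dimensional Gaussians where both log-densities are tractable and a closed form exists for comparison. The paper's contribution is precisely to bound this quantity in the realistic setting where the sampler's output density is intractable; the distributions and sample sizes here are illustrative assumptions.

```python
import math
import random

def log_normal_pdf(x, mu, sigma):
    """Log-density of a 1-D Gaussian N(mu, sigma^2)."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def symmetric_kl_mc(mu_p, sig_p, mu_q, sig_q, n=200_000, seed=0):
    """Monte Carlo estimate of KL(p||q) + KL(q||p) using samples from both sides."""
    rng = random.Random(seed)
    # E_p[log p(x) - log q(x)], estimated with draws from p
    kl_pq = sum(
        log_normal_pdf(x, mu_p, sig_p) - log_normal_pdf(x, mu_q, sig_q)
        for x in (rng.gauss(mu_p, sig_p) for _ in range(n))
    ) / n
    # E_q[log q(x) - log p(x)], estimated with draws from q
    kl_qp = sum(
        log_normal_pdf(x, mu_q, sig_q) - log_normal_pdf(x, mu_p, sig_p)
        for x in (rng.gauss(mu_q, sig_q) for _ in range(n))
    ) / n
    return kl_pq + kl_qp

def symmetric_kl_exact(mu_p, sig_p, mu_q, sig_q):
    """Closed form for Gaussians: KL(p||q) = log(sq/sp) + (sp^2 + (mp-mq)^2)/(2 sq^2) - 1/2."""
    kl_pq = math.log(sig_q / sig_p) + (sig_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sig_q ** 2) - 0.5
    kl_qp = math.log(sig_p / sig_q) + (sig_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sig_p ** 2) - 0.5
    return kl_pq + kl_qp

est = symmetric_kl_mc(0.0, 1.0, 1.0, 1.5)
exact = symmetric_kl_exact(0.0, 1.0, 1.0, 1.5)
```

Note that the Monte Carlo estimator needs samples from, and log-densities of, both distributions; when one side is the output of an SMC sampler, its density is unavailable, which is why the paper works with upper bounds obtained via a gold-standard sampler instead.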