Decayed MCMC Filtering

Filtering, that is, estimating the state of a partially observable Markov process from a sequence of observations, is one of the most widely studied problems in control theory, AI, and computational statistics. Exact computation of the posterior distribution is generally intractable for large discrete systems and for nonlinear continuous systems, so a good deal of effort has gone into developing robust approximation algorithms. This paper describes a simple stochastic approximation algorithm for filtering called decayed MCMC. The algorithm applies Markov chain Monte Carlo sampling to the space of state trajectories using a proposal distribution that favours flips of more recent state variables. The formal analysis of the algorithm involves a generalization of standard coupling arguments for MCMC convergence. We prove that for any ergodic underlying Markov process, the convergence time of decayed MCMC with inverse-polynomial decay remains bounded as the length of the observation sequence grows. We show experimentally that decayed MCMC is at least competitive with other approximation algorithms such as particle filtering.
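As a rough illustration of the idea (a sketch, not the paper's exact algorithm), the following code runs single-site Gibbs updates over the full trajectory of a binary symmetric HMM, choosing which time step to resample with probability proportional to an inverse-polynomial decay toward the present. All parameter names and values (`p_stay`, `p_err`, `alpha`, the sweep counts) are assumptions made for this example.

```python
import random

def decayed_mcmc_filter(obs, p_stay=0.8, p_err=0.2, alpha=2.0,
                        n_sweeps=4000, burn=1000, seed=0):
    """Estimate P(X_T = 1 | obs) for a binary symmetric HMM by MCMC over
    the whole state trajectory, resampling recent variables more often."""
    rng = random.Random(seed)
    T = len(obs)
    x = list(obs)  # initialize the trajectory at the observed values

    # Inverse-polynomial decay: pick time t with probability proportional
    # to 1 / (T - t)^alpha, so recent time steps are flipped most often.
    weights = [1.0 / (T - t) ** alpha for t in range(T)]
    total = sum(weights)

    def sample_time():
        r = rng.random() * total
        acc = 0.0
        for t, w in enumerate(weights):
            acc += w
            if acc >= r:
                return t
        return T - 1

    def local_weight(t, v):
        # Unnormalized probability that X_t = v given its Markov blanket:
        # the observation at t and the neighbouring states.
        w = (1 - p_err) if obs[t] == v else p_err
        if t > 0:
            w *= p_stay if x[t - 1] == v else (1 - p_stay)
        if t < T - 1:
            w *= p_stay if x[t + 1] == v else (1 - p_stay)
        return w

    hits = 0
    for sweep in range(n_sweeps):
        t = sample_time()
        w1, w0 = local_weight(t, 1), local_weight(t, 0)
        x[t] = 1 if rng.random() < w1 / (w0 + w1) else 0
        if sweep >= burn:
            hits += x[T - 1]
    return hits / (n_sweeps - burn)
```

With a long run of all-ones observations the estimate of P(X_T = 1) comes out close to 1, and symmetrically for all-zeros; the exponent `alpha = 2` mirrors the inverse-polynomial decay schedule that the paper's convergence result assumes.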
