Maximum a Posteriori Sequence Estimation Using Monte Carlo Particle Filters

We develop methods for performing maximum a posteriori (MAP) sequence estimation in non-linear, non-Gaussian dynamic models. The methods rely on a particle cloud representation of the filtering distribution, which evolves through time using importance sampling and resampling ideas. MAP sequence estimation is then performed using a classical dynamic programming technique applied to the discretised version of the state space. In contrast to standard approaches, which essentially compare only the trajectories generated directly during the filtering stage, our method efficiently computes the optimal trajectory over all combinations of the filtered states. A particular strength of the method is that MAP sequence estimation is performed sequentially, in a single forward pass through the data, without requiring an additional backward sweep. Applications to estimation of a non-linear time series model and to spectral estimation for time-varying autoregressions are described.
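The following is a minimal sketch (not the authors' reference implementation) of the idea described above: a Viterbi-style dynamic programming recursion applied to the particle locations produced by a particle filter, with the best partial trajectory to each particle carried forward in time so that no backward sweep is needed. The functions log_transition and log_likelihood are hypothetical stand-ins for the model densities log p(x_t | x_{t-1}) and log p(y_t | x_t), and the particle arrays are assumed to come from a separate filtering run.

```python
import numpy as np

def map_sequence(particles, observations, log_transition, log_likelihood):
    """particles: list of length T; each entry is an (N, d) array of the
    particle locations generated by the filter at time t.
    Returns the MAP trajectory restricted to those particle locations."""
    T = len(particles)
    N = particles[0].shape[0]

    # delta[j] = log joint probability of the best path ending at particle j
    delta = np.array([log_likelihood(observations[0], x) for x in particles[0]])
    # paths[j] = best partial trajectory ending at particle j, stored forward
    # in time so the MAP estimate is available immediately at the final step
    paths = [[x] for x in particles[0]]

    for t in range(1, T):
        new_delta = np.empty(N)
        new_paths = []
        for j, xj in enumerate(particles[t]):
            # score of reaching particle j from each particle i at time t-1
            scores = delta + np.array(
                [log_transition(xj, xi) for xi in particles[t - 1]]
            )
            i_best = int(np.argmax(scores))
            new_delta[j] = scores[i_best] + log_likelihood(observations[t], xj)
            new_paths.append(paths[i_best] + [xj])
        delta, paths = new_delta, new_paths

    return np.array(paths[int(np.argmax(delta))])
```

Note that the inner maximisation considers every pairing of particles at successive time steps (O(N^2) work per step), which is what distinguishes this optimisation over all combinations of filtered states from simply comparing the N trajectories retained by the filter itself.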
