Stability of optimal filter higher-order derivatives

In many scenarios, a state-space model depends on a parameter that needs to be inferred from data. Using stochastic gradient search and the (first-order) derivative of the optimal filter, this parameter can be estimated online. Analyzing the asymptotic behavior of such online parameter estimation methods in non-linear state-space models requires results on the existence and stability of the higher-order derivatives of the optimal filter. These existence and stability properties are studied here. We show that the optimal filter higher-order derivatives exist, forget their initial conditions exponentially fast, and are geometrically ergodic. The obtained results hold under relatively mild conditions and apply to state-space models encountered in practice.
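To make the online estimation scheme referred to above concrete, the following is a minimal sketch of stochastic gradient search driven by the optimal filter derivative. It assumes a scalar linear-Gaussian state-space model, where the optimal filter is the Kalman filter and its derivative with respect to the parameter can be obtained by differentiating the Kalman recursions; the model, step sizes, and variable names are illustrative assumptions only, and in the non-linear setting treated in the paper the filter and its derivatives have no closed form and must be approximated (e.g. by particle methods).

```python
# Hedged sketch (not the paper's algorithm): recursive maximum likelihood for
# a scalar linear-Gaussian state-space model, using the Kalman filter and its
# tangent (derivative) recursions to form the gradient of the one-step
# predictive log-likelihood. All constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# True model: x_{t+1} = a_true * x_t + w_t,  y_t = x_t + v_t
a_true, q, r = 0.8, 0.5, 1.0
T = 20000
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(T)

# Online gradient ascent on the predictive log-likelihood.
a = 0.2                      # initial parameter guess
m, P = 0.0, 1.0              # filter mean / variance
dm, dP = 0.0, 0.0            # their derivatives w.r.t. a (tangent filter)
for t in range(1, T):
    # Prediction step and its derivative w.r.t. a.
    m_pred, P_pred = a * m, a * a * P + q
    dm_pred = m + a * dm
    dP_pred = 2.0 * a * P + a * a * dP

    # Innovation, its variance, and their derivatives.
    e, S = y[t] - m_pred, P_pred + r
    de, dS = -dm_pred, dP_pred

    # Gradient of log N(y_t; m_pred, S) with respect to a.
    grad = -0.5 * dS / S - e * de / S + 0.5 * e * e * dS / (S * S)

    # Stochastic gradient (recursive maximum likelihood) parameter update.
    gamma = 1.0 / (t ** 0.6)
    a = float(np.clip(a + gamma * grad, -0.99, 0.99))  # keep the model stable

    # Kalman update step and its derivative w.r.t. a.
    K = P_pred / S
    dK = (dP_pred * S - P_pred * dS) / (S * S)
    m = m_pred + K * e
    dm = dm_pred + dK * e + K * de
    P = (1.0 - K) * P_pred
    dP = -dK * P_pred + (1.0 - K) * dP_pred

print(f"estimated a = {a:.3f}  (true a = {a_true:.3f})")
```

The stability and forgetting properties studied in the paper are what justify schemes of this kind: the filter derivatives must not blow up, and their dependence on the (arbitrary) initialization of m, P, dm, dP must vanish over time.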
