Unbiased estimation of the gradient of the log-likelihood in inverse problems

We consider the problem of estimating a parameter associated with a Bayesian inverse problem. Treating the unknown initial condition as a nuisance parameter, one typically must resort to a numerical approximation of the gradient of the log-likelihood and must also adopt a discretization of the problem in space and/or time. We develop a new methodology to unbiasedly estimate the gradient of the log-likelihood with respect to the unknown parameter, i.e., the expectation of the estimate has no discretization bias. Such a property is not only useful for estimation in terms of the original stochastic model of interest, but can also be exploited in stochastic gradient algorithms, which benefit from unbiased estimates. Under appropriate assumptions, we prove that our estimator is not only unbiased but also of finite variance. In addition, when implemented on a single processor, we show that the cost to achieve a given level of error is comparable to that of multilevel Monte Carlo methods, both practically and theoretically. However, the new algorithm admits parallel computation on arbitrarily many processors without any asymptotic loss of efficiency. In practice, this means any precision can be achieved in a fixed, finite amount of time, provided that enough processors are available.
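The paper's construction is considerably more involved (it debiases particle approximations of the gradient of the log-likelihood over both discretization level and sample size), but the core randomization device, in the spirit of Rhee and Glynn's single-term estimator, can be sketched on a toy limit. The sketch below is illustrative only and is not the authors' algorithm: every quantity (the sequence `Y_l` approximating e, the geometric level distribution with ratio `r`) is a hypothetical stand-in for the biased discretizations of the model. A random level is drawn, the increment between consecutive levels is computed, and dividing by the level probability yields an estimator whose expectation is the bias-free limit:

```python
import math
import random

def single_term_estimator(rng, r=2 ** -1.5):
    """One sample of a single-term debiased (Rhee--Glynn-type) estimator.

    Toy setting: Y_l = (1 + 2^-l)^(2^l) is a "level-l discretization"
    of e, biased at every finite l. Drawing a random level L with
    P(L = l) = (1 - r) r^l and returning the weighted increment gives
        E[(Y_L - Y_{L-1}) / P(L)] = sum_l (Y_l - Y_{l-1}) = lim_l Y_l,
    i.e., the estimate is unbiased for the limit. The ratio r is chosen
    so that the increments shrink fast enough for finite variance.
    """
    u = rng.random()
    level = int(math.log(u) / math.log(r))   # geometric: P(L = l) = (1 - r) r^l
    p_level = (1.0 - r) * r ** level

    def Y(l):
        if l < 0:
            return 0.0
        h = 2.0 ** -l
        # exp(log1p(h) / h) = (1 + h)^(1/h); log1p keeps large levels accurate
        return math.exp(math.log1p(h) / h)

    return (Y(level) - Y(level - 1)) / p_level

rng = random.Random(0)
n = 200_000
est = sum(single_term_estimator(rng) for _ in range(n)) / n
print(est)  # averages to e ≈ 2.71828, although every Y_l is biased
```

In the paper's setting the role of `Y_l` is played by (coupled) sequential Monte Carlo approximations of the gradient of the log-likelihood, and the single-processor cost analysis hinges on exactly this trade-off between the decay of the increments and the tail of the level distribution; the embarrassingly parallel structure comes from the fact that independent copies of such samples can be averaged across processors.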
