Stability of Stochastic Approximations With “Controlled Markov” Noise and Temporal Difference Learning

We are interested in understanding the stability (almost sure boundedness) of stochastic approximation algorithms (SAs) driven by a “controlled Markov” process. Analyzing this class of algorithms is important, since many reinforcement learning (RL) algorithms can be cast in this form. In this paper, we present easily verifiable sufficient conditions for the stability and convergence of such SAs. Many RL applications involve continuous state spaces; while our analysis readily ensures stability for such continuous-state applications, traditional analyses do not. Compared to the existing literature, our analysis presents a two-fold generalization: 1) the Markov process may evolve in a continuous state space, and 2) the process need not be ergodic under any given stationary policy. Temporal difference (TD) learning is an important policy evaluation method in RL. The theory developed herein is used to analyze generalized $\text{TD}(0)$, an important variant of TD. Our theory is also used to analyze a TD formulation of supervised learning for forecasting problems.
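
To fix ideas, recursions of this kind are typically written in the stochastic approximation form $x_{n+1} = x_n + a(n)\left[h(x_n, Y_n) + M_{n+1}\right]$, where $\{Y_n\}$ is the controlled Markov process and $\{M_n\}$ is a martingale difference sequence. The sketch below shows how a standard RL algorithm fits this template: plain $\text{TD}(0)$ with linear function approximation run on a toy continuous-state Markov chain. It is a minimal illustration, not the paper's generalized $\text{TD}(0)$; the dynamics, reward, feature map, discount factor, and step-size schedule are all assumptions made for the example.

```python
import numpy as np

# Minimal sketch: TD(0) with linear function approximation on a toy
# continuous-state Markov chain, illustrating the SA recursion
#   theta_{n+1} = theta_n + a(n) * [ r(X_n) + gamma * phi(X_{n+1})^T theta_n
#                                    - phi(X_n)^T theta_n ] * phi(X_n).
# All modeling choices below (dynamics, reward, features, step sizes) are
# illustrative assumptions, not taken from the paper.

rng = np.random.default_rng(0)

def step(x):
    """One transition of a toy continuous-state (AR(1)) Markov chain."""
    return 0.9 * x + 0.1 * rng.standard_normal()

def reward(x):
    """Illustrative reward: penalize squared distance from the origin."""
    return -x ** 2

def phi(x):
    """Hypothetical feature map; the value function is approximated
    as phi(x)^T theta."""
    return np.array([1.0, x, x ** 2])

gamma = 0.95           # discount factor (illustrative)
theta = np.zeros(3)    # linear value-function parameters (the SA iterate)
x = 0.0                # initial state

for n in range(100_000):
    a_n = 1.0 / (n + 1) ** 0.7          # diminishing step sizes
    x_next = step(x)
    # TD error: one-step bootstrapped estimate minus current estimate.
    delta = reward(x) + gamma * phi(x_next) @ theta - phi(x) @ theta
    theta = theta + a_n * delta * phi(x)
    x = x_next

print("estimated parameters:", theta)
```

Here $\theta_n$ plays the role of the SA iterate, and the state sequence $\{X_n\}$ is the Markov noise (uncontrolled in this simple example, since no policy is being varied). The exponent $0.7$ is chosen so that the step sizes satisfy the usual conditions $\sum_n a(n) = \infty$ and $\sum_n a(n)^2 < \infty$.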
