Dynamic Programming Deconstructed: Transformations of the Bellman Equation and Computational Efficiency

Some approaches to solving challenging dynamic programming problems, such as Q-learning, begin by transforming the Bellman equation into an alternative functional equation in order to open up a new line of attack. Our paper studies this idea systematically, with a focus on boosting computational efficiency. We characterize the set of valid transformations of the Bellman equation, where validity means that the transformed equation preserves the link to optimality held by the original Bellman equation. We then examine the solutions of the transformed Bellman equations and analyze correspondingly transformed versions of the algorithms used to solve for optimal policies. These investigations yield new approaches to a variety of discrete-time dynamic programming problems, including those with features such as recursive preferences or a desire for robustness. Increased computational efficiency is demonstrated via time complexity arguments and numerical experiments.
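
To make the idea concrete, the following is a minimal sketch of one such transformation in a McCall-style job search model. This is an illustrative example in the spirit of the paper, not the authors' code, and the parameter values are assumed. The standard Bellman equation v(w) = max{ w/(1-beta), c + beta E[v(w')] } is a fixed-point problem in an n-dimensional value function, while the transformed equation iterates on a single scalar, the continuation value h = c + beta E[v(w')], yet recovers the same solution:

```python
import numpy as np

# Sketch with assumed parameters (not from the paper): an unemployed worker
# draws a wage offer w each period and either accepts (lifetime value
# w / (1 - beta)) or rejects, receiving compensation c and drawing again.
beta, c = 0.96, 1.0                        # discount factor, unemployment income
w_grid = np.linspace(0.5, 2.0, 200)        # wage offer grid (assumed)
q = np.full(len(w_grid), 1 / len(w_grid))  # uniform offer distribution (assumed)

# Standard Bellman equation: iterate on the n-dimensional value function
#   v(w) = max{ w / (1 - beta), c + beta * E[v(w')] }
v = np.zeros(len(w_grid))
for _ in range(1000):
    v = np.maximum(w_grid / (1 - beta), c + beta * q @ v)

# Transformed equation: the continuation value h = c + beta * E[v(w')]
# solves the scalar fixed-point problem
#   h = c + beta * E[ max{ w' / (1 - beta), h } ],
# collapsing the n-dimensional problem to a one-dimensional one.
h = 0.0
for _ in range(1000):
    h = c + beta * q @ np.maximum(w_grid / (1 - beta), h)

# Both routes recover the same value function and reservation wage.
v_from_h = np.maximum(w_grid / (1 - beta), h)
assert np.allclose(v, v_from_h, atol=1e-6)
print("reservation wage:", w_grid[np.searchsorted(w_grid / (1 - beta), h)])
```

Each update costs O(n) in both cases, but the transformed problem lives in a one-dimensional space rather than an n-dimensional one, illustrating the kind of dimensionality reduction that the paper's validity characterization and complexity arguments formalize.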
