Identifying effective policies in approximate dynamic programming: Beyond regression

Dynamic programming formulations may be used to solve for optimal policies in Markov decision processes. Due to computational complexity, dynamic programs must often be solved approximately. We consider the case of a tunable approximation architecture used in lieu of computing true value functions. The standard methodology advocates tuning the approximation architecture via sample-path information and regression to obtain a good fit to the true value function. We provide an example showing that this approach may lead to unnecessarily poor policies, and we suggest direct search methods for finding better-performing value function approximations. We illustrate the concept with an application to ambulance redeployment.
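The gap between fitting the value function and finding a good policy can be sketched with a toy example (hypothetical numbers, not the paper's actual instance): a scalar linear approximation whose least-squares fit to the true values induces a poor greedy policy, while a simple direct search over the same weight recovers the optimal policy.

```python
# Toy illustration (assumed numbers for exposition): a linear value-function
# approximation V_hat(s) = w * phi(s). The least-squares fit of w to the true
# values yields a bad greedy policy, whereas direct search over w, scored by
# simulated policy performance, finds the optimal one.

# States s1..s4 with known true values and a scalar feature phi.
phi    = [1.0, 2.0, 10.0, 11.0]
true_v = [1.0, 0.0, 10.0, 11.0]

# Regression step: least-squares fit w* = sum(phi_i * v_i) / sum(phi_i^2).
# The large states s3, s4 dominate the fit and force w > 0.
w_ls = sum(p * v for p, v in zip(phi, true_v)) / sum(p * p for p in phi)

def policy_value(w):
    """Reward earned from a start state s0 whose two actions lead to
    s1 (true value 1) or s2 (true value 0); the greedy policy picks the
    successor with the larger approximate value w * phi(s)."""
    return 1.0 if w * phi[0] > w * phi[1] else 0.0

# Regression gives w > 0, so the greedy policy prefers s2 (larger feature)
# and earns nothing.
print(policy_value(w_ls))   # 0.0

# Direct search over w: score each candidate by policy performance rather
# than by fit to the true value function. Any negative w is optimal here,
# even though it fits the true values far worse than w_ls.
w_best = max((w / 10.0 for w in range(-10, 11)), key=policy_value)
print(policy_value(w_best))  # 1.0
```

In practice the search over weights would be carried out with a derivative-free method such as Nelder-Mead simplex or the cross-entropy method, scoring each candidate weight vector by simulated policy performance; the grid search above only stands in for that step.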
