Exploration bonuses and dual control

Finding the Bayesian balance between exploration and exploitation in adaptive optimal control is in general intractable. This paper shows how to compute suboptimal estimates based on a certainty equivalence approximation (Cozzolino, Gonzalez-Zubieta & Miller, 1965) arising from a form of dual control. This systematizes and extends existing uses of exploration bonuses in reinforcement learning (Sutton, 1990). The approach has two components: a statistical model of uncertainty in the world and a way of turning this uncertainty into exploratory behavior. The general approach is applied to two-dimensional mazes with movable barriers, and its performance is compared with Sutton's DYNA system.
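To make the flavor of an exploration bonus concrete, the sketch below implements a tabular Dyna-style agent in the spirit of Sutton (1990), with a simple recency-based bonus standing in for the paper's statistical model of uncertainty. It is not the dual-control computation the abstract describes; the environment interface `env_step(s, a)`, the bonus coefficient `kappa`, and all other names are illustrative assumptions.

```python
import numpy as np

def dyna_q_plus(env_step, n_states, n_actions, start_state=0,
                episodes=50, alpha=0.1, gamma=0.95,
                kappa=0.01, planning_steps=10, seed=0):
    """Tabular Dyna-Q with a recency-based exploration bonus (Dyna-Q+ style).

    env_step(s, a) -> (reward, next_state, done) is an assumed interface to
    the maze.  The bonus kappa * sqrt(time since last try) is a crude stand-in
    for a statistical model of uncertainty: transitions that have not been
    exercised recently look more attractive during planning, so the agent
    re-tests parts of the maze where barriers may have moved.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    model = {}                                    # (s, a) -> (r, s')
    last_tried = np.zeros((n_states, n_actions))  # time of last real try
    t = 0
    for _ in range(episodes):
        s, done = start_state, False
        while not done:
            t += 1
            # Greedy action with random tie-breaking.
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
            r, s_next, done = env_step(s, a)
            # Direct reinforcement-learning update from the real transition.
            target = r + (0.0 if done else gamma * Q[s_next].max())
            Q[s, a] += alpha * (target - Q[s, a])
            model[(s, a)] = (r, s_next)
            last_tried[s, a] = t
            # Planning: replay modelled transitions, adding the bonus.
            keys = list(model)
            for _ in range(planning_steps):
                ps, pa = keys[rng.integers(len(keys))]
                pr, pnext = model[(ps, pa)]
                bonus = kappa * np.sqrt(t - last_tried[ps, pa])
                Q[ps, pa] += alpha * (pr + bonus
                                      + gamma * Q[pnext].max() - Q[ps, pa])
            s = s_next
    return Q
```

In a maze whose barriers can move, the bonus is what drives the agent to re-try transitions it has not sampled recently and so discover a newly opened path; the paper's contribution is to replace this hand-tuned bonus with one derived from a certainty equivalence approximation to dual control.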

[1]  H. Simon,et al.  A Behavioral Model of Rational Choice , 1955 .

[2]  H. Simon,et al.  Rational choice and the structure of the environment. , 1956, Psychological review.

[3]  S. Dreyfus Dynamic Programming and the Calculus of Variations , 1960 .

[4]  Ronald A. Howard,et al.  Dynamic Programming and Markov Processes , 1960 .

[5]  L. Meier Combined optimal control and estimation. , 1965 .

[6]  D. Naidu,et al.  Optimal Control Systems , 2018 .

[7]  C. Striebel Sufficient statistics in the optimum control of stochastic systems , 1965 .

[8]  R. Rishel Necessary and Sufficient Dynamic Programming Conditions for Continuous Time Stochastic Optimal Control , 1970 .

[9]  Yaakov Bar-Shalom,et al.  An actively adaptive control for linear systems with random parameters via the dual control approach , 1972, CDC 1972.

[10]  W. J. Studden,et al.  Theory Of Optimal Experiments , 1972 .

[11]  Y. Bar-Shalom,et al.  Wide-sense adaptive dual control for nonlinear stochastic systems , 1973 .

[12]  M. Athans,et al.  Some properties of the dual adaptive stochastic control algorithm , 1981 .

[13]  G. Monahan State of the Art—A Survey of Partially Observable Markov Decision Processes: Theory, Models, and Algorithms , 1982 .

[14]  Mitsuo Sato,et al.  Learning control of finite Markov chains with unknown transition probabilities , 1982 .

[15]  Patchigolla Kiran Kumar,et al.  A Survey of Some Results in Stochastic Adaptive Control , 1985 .

[16]  Naresh K. Sinha,et al.  Control Systems , 1986 .

[17]  C. Watkins Learning from delayed rewards , 1989 .

[18]  Richard S. Sutton,et al.  Learning and Sequential Decision Making , 1989 .

[19]  Alan D. Christiansen,et al.  Learning reliable manipulation strategies without initial physical models , 1990, Proceedings., IEEE International Conference on Robotics and Automation.

[20]  Richard S. Sutton,et al.  Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming , 1990, ML.

[21]  Sebastian Thrun,et al.  Active Exploration in Dynamic Environments , 1991, NIPS.

[22]  W. Lovejoy A survey of algorithmic methods for partially observed Markov decision processes , 1991 .

[23]  Sebastian Thrun,et al.  The role of exploration in learning control , 1992 .

[24]  David A. Cohn,et al.  Neural Network Exploration Using Optimal Experiment Design , 1993, NIPS.

[25]  Andrew G. Barto,et al.  Learning to Act Using Real-Time Dynamic Programming , 1995, Artif. Intell..

[26]  Michael L. Littman,et al.  Algorithms for Sequential Decision Making , 1996 .