Metareasoning for Planning Under Uncertainty

The conventional model of online planning under uncertainty assumes that an agent can stop and plan without incurring any cost for the time spent planning. However, planning time is not free in most real-world settings. For example, an autonomous drone is subject to nature's forces, such as gravity, even while it thinks, and must either pay a price to counteract these forces and stay in place, or grapple with the state change caused by acquiescing to them. Policy optimization in these settings requires metareasoning, a process that trades off the cost of planning against the potential policy improvement that planning can achieve. We formalize and analyze the metareasoning problem for Markov Decision Processes (MDPs). Our work subsumes previously studied special cases of metareasoning and shows that, in the general case, metareasoning is at most polynomially harder than solving MDPs with any given algorithm that disregards the cost of thinking. For reasons we discuss, optimal general metareasoning turns out to be impractical, motivating approximations. We present approximate metareasoning procedures that rely on special properties of the BRTDP planning algorithm and explore the effectiveness of our methods on a variety of problems.
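To make the think-versus-act trade-off concrete, below is a minimal sketch of a greedy one-step metareasoning loop wrapped around a BRTDP-style planner that maintains lower and upper bounds on state values. It is illustrative only: the names (plan_step, value_bounds, best_action) and the use of the bound gap as a proxy for potential improvement are assumptions made for this sketch, not the procedures proposed in the paper.

    # Illustrative sketch (hypothetical names): greedy one-step metareasoning
    # around a planner that exposes BRTDP-style value bounds.
    def metareason(state, planner, think_cost, max_think_steps):
        """Keep planning while the potential policy improvement, proxied by the
        gap between the planner's upper and lower value bounds, exceeds the
        cost of one more unit of thinking; then act."""
        for _ in range(max_think_steps):
            lower, upper = planner.value_bounds(state)   # bounds on V(state)
            if upper - lower <= think_cost:              # further thought is not worth its cost
                break
            planner.plan_step(state)                     # one more unit of computation
        return planner.best_action(state)                # act under the current best policy

In the setting described above, the cost of thinking may also include the state drift the agent undergoes while it deliberates (as with the drone counteracting gravity); a fuller treatment would fold that drift into the stopping criterion rather than treating the cost as a constant.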
