Planning and Acting in Incomplete Domains

Engineering complete planning domain descriptions is often very costly because of human error or lack of domain knowledge. Learning complete domain descriptions is also very challenging because many features are irrelevant to achieving the goals and data may be scarce. We present a planner and agent that respectively plan and act in incomplete domains by i) synthesizing plans that avoid execution failure due to ignorance of the domain model, and ii) passively learning about the domain model during execution to improve later re-planning attempts. Our planner DeFault is the first to reason about a domain's incompleteness to avoid potential plan failure. DeFault computes failure explanations for each action and state in the plan and counts the number of interpretations of the incomplete domain under which failure will occur. We show that DeFault performs best by counting prime implicants (failure diagnoses) rather than propositional models. Our agent Goalie learns about the preconditions and effects of incompletely specified actions while monitoring its state and, in conjunction with DeFault's plan failure explanations, can diagnose past and future action failures. We show that by reasoning about incompleteness (as opposed to ignoring it) Goalie fails and re-plans less often and executes fewer actions.
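To make the counting intuition concrete, the following is a minimal, hypothetical Python sketch (not the authors' DeFault implementation): it enumerates every complete interpretation of two unknown domain features and counts those under which a two-step plan fails. The action names, feature names, and the plan itself are illustrative assumptions.

```python
from itertools import product

# Hypothetical incomplete action model: each "possible" annotation is an
# unknown feature that may or may not hold in the true domain. Here, the
# action `unlock` has a possible precondition `have-key` and a possible
# delete effect removing `door-locked`; neither is known for certain.
possible_features = ["unlock-needs-key", "unlock-clears-lock"]

def plan_fails(interpretation, state):
    """Return True if the plan [unlock, open-door] fails under one complete
    interpretation of the unknown features (illustrative example only)."""
    needs_key = interpretation["unlock-needs-key"]
    clears_lock = interpretation["unlock-clears-lock"]
    # Step 1: unlock fails if it needs a key we do not have.
    if needs_key and "have-key" not in state:
        return True
    state = set(state)
    if clears_lock:
        state.discard("door-locked")
    # Step 2: open-door requires the door to be unlocked.
    return "door-locked" in state

initial_state = {"door-locked"}  # no key in hand

# Count the complete interpretations (2^n total) under which the plan fails;
# a DeFault-style risk measure prefers plans that minimize this count.
failing = 0
for values in product([False, True], repeat=len(possible_features)):
    interp = dict(zip(possible_features, values))
    failing += plan_fails(interp, initial_state)

print(f"plan fails in {failing} of {2**len(possible_features)} interpretations")
```

Under these assumptions the plan fails in 3 of 4 interpretations; exhaustive enumeration is exponential in the number of unknown features, which is why counting prime implicants of the failure explanation, as the paper proposes, can be a more tractable proxy.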
