Explaining the Space of Plans through Plan-Property Dependencies

A key problem in explainable AI planning is to elucidate decision rationales. User questions in this context are often contrastive, taking the form "Why do A rather than B?". Answering such a question requires a statement about the space of possible plans. We propose to do so through plan-property dependencies, where plan properties are Boolean functions over plans that the user is interested in, and dependencies are entailment relations in plan space. The answer to the above question then consists of those properties C entailed by B. We introduce a formal framework for such dependency analysis. We instantiate and operationalize that framework for the case of dependencies between goals in oversubscription planning; more powerful plan properties can be compiled into that special case. We show experimentally that, on a variety of benchmarks, the suggested analyses can be feasible and produce answers compact enough for human inspection.
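The core entailment check can be illustrated with a minimal sketch. It assumes the plan space is enumerated explicitly as sets of satisfied properties, and that properties are identified with goal facts (the oversubscription special case); the function and variable names are illustrative, not from the paper's implementation, and real plan spaces are of course far too large to enumerate this way.

```python
def entailed_properties(plans, b, properties):
    """Return the properties C entailed by property b across the plan space.

    plans: iterable of sets, each the properties satisfied by one plan.
    b entails c iff every plan satisfying b also satisfies c.
    """
    sat_b = [plan for plan in plans if b in plan]
    if not sat_b:
        # b holds in no plan: vacuously, it entails every other property.
        return {c for c in properties if c != b}
    return {c for c in properties
            if c != b and all(c in plan for plan in sat_b)}

# Toy plan space over goals g1..g3: every plan achieving g2 also achieves g1,
# so the answer to "why not g2?" includes the forced commitment to g1.
plans = [{"g1", "g2"}, {"g1"}, {"g3"}]
print(entailed_properties(plans, "g2", {"g1", "g2", "g3"}))  # → {'g1'}
```

In the contrastive-question reading, the returned set is the answer to "Why do A rather than B?": the commitments the user would necessarily take on by insisting on B.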
