Model-based contrastive explanations for explainable planning

An important type of question that arises in Explainable Planning is the contrastive question, of the form "Why action A instead of action B?". Such questions can be answered with a contrastive explanation that compares properties of the original plan containing A against those of the contrastive plan containing B. An effective explanation of this type highlights the differences between the decisions made by the planner and those the user expected, and provides further insight into the model and the planning process. Producing this kind of explanation requires generating the contrastive plan. This paper introduces domain-independent compilations of user questions into constraints. These constraints are added to the planning model, so that a solution to the new model represents the contrastive plan. We give a formal description of the compilation from user question to constraints in a temporal and numeric PDDL2.1 planning setting.
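As a concrete illustration of the compilation idea, consider the question "Why drive instead of fly?" in an invented toy domain. The sketch below is not taken from the paper and omits the temporal and numeric features handled by the full PDDL2.1 formulation; the domain, action, and predicate names are hypothetical. The model is extended with two fresh predicates that record whether each action has been applied, and the goal is extended to forbid the questioned action and require the alternative.

```
;; Minimal illustrative sketch (invented toy domain, classical fragment only).
;; Question: "Why drive from wp1 to wp2 instead of fly?"
;; Compilation: add fresh predicates recording action use, then constrain the goal.

(define (domain delivery-contrastive)
  (:requirements :strips :typing)
  (:types waypoint)
  (:predicates
    (at ?wp - waypoint)
    (connected ?from ?to - waypoint)
    (not-applied-drive)   ; stays true while the questioned action is unused
    (applied-fly))        ; becomes true once the alternative action is used

  (:action drive
    :parameters (?from ?to - waypoint)
    :precondition (and (at ?from) (connected ?from ?to))
    :effect (and (not (at ?from)) (at ?to)
                 (not (not-applied-drive))))   ; record that drive was used

  (:action fly
    :parameters (?from ?to - waypoint)
    :precondition (and (at ?from) (connected ?from ?to))
    :effect (and (not (at ?from)) (at ?to)
                 (applied-fly))))              ; record that fly was used

(define (problem deliver-1)
  (:domain delivery-contrastive)
  (:objects wp1 wp2 - waypoint)
  (:init (at wp1) (connected wp1 wp2) (not-applied-drive))
  (:goal (and (at wp2)
              (not-applied-drive)   ; the questioned action must not appear
              (applied-fly))))      ; the alternative action must appear
```

Solving this modified problem with an off-the-shelf planner yields a contrastive plan that uses fly rather than drive, which can then be compared against the original plan to explain the planner's choice.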
