The Emerging Landscape of Explainable Automated Planning & Decision Making

In this paper, we provide a comprehensive outline of the different threads of work in Explainable AI Planning (XAIP) that have emerged as a focus area in the last couple of years, and contrast them with earlier efforts in the field in terms of techniques, target users, and delivery mechanisms. We hope that this survey will guide new researchers in automated planning toward the role of explanations in the effective design of human-in-the-loop systems, and provide established researchers with some perspective on the evolution of the exciting world of explainable planning.