Expectation-Aware Planning: A General Framework for Synthesizing and Executing Self-Explaining Plans for Human-AI Interaction

In this work, we present a general formulation for decision making in human-in-the-loop planning problems where the human's expectations about an autonomous agent may differ from the agent's own model. We show how our formulation for such multi-model planning problems both captures existing approaches to this problem and can be used to generate novel explanatory behaviors. Our formulation also reveals a deep connection between multi-model planning and epistemic planning, and we show how classical planning compilations designed for epistemic planning can be leveraged to solve multi-model planning problems. We empirically show that this new compilation provides a computational advantage over previous approaches that separate reasoning about model reconciliation from identifying the agent's plan.
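
To make the multi-model setting concrete, here is a minimal, illustrative sketch, not the compilation described above: a brute-force search over pairs of plans and explanations, where an explanation is a set of updates to the human's model. The helpers valid_plans, apply_updates, cost, and expl_cost, and the collection candidate_updates, are hypothetical placeholders supplied by the caller; the approach summarized in the abstract compiles this joint search into a classical planning problem rather than enumerating it directly.

    # Illustrative sketch only (assumed helper functions, brute-force enumeration).
    # Plans are tuples of action names; explanations are subsets of candidate model updates.
    from itertools import chain, combinations

    def powerset(items):
        """All subsets of the candidate model updates, smallest first."""
        items = list(items)
        return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

    def expectation_aware_plan(agent_model, human_model, candidate_updates,
                               valid_plans, apply_updates, cost, expl_cost):
        """Return the cheapest (plan, explanation) pair such that the plan is valid
        in the agent's model and also valid in the human's model once the
        explanation (a set of model updates) has been communicated."""
        best, best_cost = None, float("inf")
        for expl in powerset(candidate_updates):
            updated_human = apply_updates(human_model, expl)   # model reconciliation step
            expected = set(valid_plans(updated_human))         # plans the human would accept
            for plan in valid_plans(agent_model):              # plans the agent can execute
                if plan in expected:                           # plan meets the (updated) expectations
                    total = cost(plan) + expl_cost(expl)       # trade off plan cost vs. explanation cost
                    if total < best_cost:
                        best, best_cost = (plan, expl), total
        return best

The objective here, plan cost plus explanation cost, is one simple way to capture the trade-off between acting explicably and communicating explanations; other objectives fit the same skeleton.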
