Human-Aware Planning Revisited: A Tale of Three Models

Human-aware planning requires an agent to be aware of the mental model of the humans it works with, in addition to their physical or capability model. This not only allows the agent to envisage the desired roles of the human in a joint plan but also to anticipate how its plan will be perceived by the latter. The human mental model becomes especially useful in the context of an explainable planning (XAIP) agent, since an explanatory process cannot be a soliloquy, i.e. it must incorporate the human's beliefs and expectations of the planner. In this paper, we survey our recent efforts in this direction.

Cognitive AI teaming (Chakraborti et al. 2017a) requires a planner to perform argumentation over a set of models during the plan generation process, as illustrated in Figure 1. Here, $\mathcal{M}^R$ is the model of the agent embodying the planner (e.g. a robot), and $\mathcal{M}^H$ is the model of the human in the loop. Further, $\mathcal{M}^R_h$ is the model the human thinks the robot has, and $\mathcal{M}^H_r$ is the model that the robot thinks the human has. Finally, $\widetilde{\mathcal{M}}^R_h$ is the robot's approximation of $\mathcal{M}^R_h$; for the rest of the paper we will use $\mathcal{M}^R_h$ to refer to both since, for all intents and purposes, this is all the robot has access to. Note that the human mental model $\mathcal{M}^R_h$ is in addition to the (robot's belief of the) human model $\mathcal{M}^H_r$ traditionally encountered in human-robot teaming (HRT) settings, and is, in essence, the fundamental thesis of the recent works on plan explanations (Chakraborti et al. 2017b) and explicable planning (Zhang et al. 2017).

[Figure 1: Argumentation over multiple models during the deliberative process of a human-aware planner (e.g. a robot).]

The need for explicable planning or plan explanations arises when the models $\mathcal{M}^R$ and $\mathcal{M}^R_h$ diverge, so that the optimal plans in the respective models may not be the same, and hence the behavior of the robot, though optimal in its own model, is inexplicable to the human. The same holds for discrepancies between $\mathcal{M}^H$ and $\mathcal{M}^H_r$, where the robot might have unrealistic expectations of the human in a joint plan. An XAIP agent (Fox et al. 2017; Langley et al. 2017; Weld and Bansal 2018) should be able to deal with such model differences and participate in an explanatory dialog with the human such that both of them can be on the same page during a collaborative activity. This is referred to as model reconciliation (Chakraborti et al. 2017b) and forms the core of the explanatory process of an XAIP agent. In this paper, we look at the scope of problems engendered by this multi-model setting and describe the recent work in this direction. Specifically:

- We outline the scope of behaviors engendered by human-aware planning, including joint planning, as studied in teaming, using the human model, as well as explicable planning with the human mental model;
- We situate the plan explanation problem in the context of perceived inexplicability of the robot's plans or behaviors due to differences in these models;
- We discuss how the plan explanation process can be seen as one of model reconciliation, where $\mathcal{M}^R_h$ (and/or $\mathcal{M}^H_r$) is brought closer to $\mathcal{M}^R$ ($\mathcal{M}^H$); a minimal sketch of this search is given after this list;
- We discuss how explicability and explanation costs can be traded off during plan generation;
- We discuss how this process can be adapted to handle uncertainty or multiple humans in the loop;
- We discuss the results of a user study that testify to the usefulness of the model reconciliation process; and
- We point to ongoing work in the space of abstractions and deception using the human mental model.
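To make the model reconciliation step concrete, the following is a minimal sketch in Python. It assumes, as in the model-space search of Chakraborti et al. (2017b), that a model can be flattened into a set of model features (e.g. individual precondition and effect conditions of a PDDL domain). The function name `minimally_complete_explanation` and the oracle `is_optimal_in` are illustrative, not from the original papers; in practice the oracle amounts to a plan validator plus a call to an optimal planner.

```python
from itertools import combinations

def minimally_complete_explanation(robot_model, human_model, plan, is_optimal_in):
    """Search for a smallest set of model updates (an explanation) that,
    applied to the human's mental model of the robot, makes the robot's
    plan optimal in the updated model (cf. Chakraborti et al. 2017b).

    Models are sets of model features; `is_optimal_in(plan, model)` is an
    assumed oracle that checks whether `plan` is optimal in `model`.
    """
    # Candidate edits are the features the two models disagree on.
    diff = robot_model.symmetric_difference(human_model)
    # Enumerate candidate explanations in increasing order of size,
    # so the first one found is minimally complete.
    for k in range(len(diff) + 1):
        for edits in combinations(diff, k):
            updated = set(human_model)
            for feature in edits:
                if feature in robot_model:
                    updated.add(feature)      # the human was missing this feature
                else:
                    updated.discard(feature)  # the human wrongly assumed this feature
            if is_optimal_in(plan, updated):
                return set(edits)
    return None  # the plan cannot be justified by model updates alone
```

The explicability versus explanation trade-off mentioned in the contributions can then, roughly, be folded into plan generation by charging each candidate plan for the explanation it would require; the weight `alpha` below is a hypothetical knob, not a quantity prescribed by the papers.

```python
def most_explainable_plan(candidates, plan_cost, robot_model, human_model,
                          is_optimal_in, alpha=1.0):
    """Pick the candidate plan minimizing execution cost plus the (weighted)
    size of the explanation needed to reconcile the human's model to it."""
    def total_cost(plan):
        expl = minimally_complete_explanation(robot_model, human_model,
                                              plan, is_optimal_in)
        return plan_cost(plan) + alpha * (len(expl) if expl is not None
                                          else float("inf"))
    return min(candidates, key=total_cost)
```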

[1] Scott I. Tannenbaum, et al. Do Team and Individual Debriefs Enhance Performance? A Meta-Analysis, 2013, Hum. Factors.

[2] Lars Karlsson, et al. Grandpa Hates Robots - Interaction Constraints for Planning in Inhabited Environments, 2014, AAAI.

[3] Rodney A. Brooks. A Robust Layered Control System for a Mobile Robot, 1986, IEEE Journal on Robotics and Automation.

[4] Subbarao Kambhampati, et al. Domain Independent Approaches for Finding Diverse Plans, 2007, IJCAI.

[5] Alessandro Saffiotti, et al. Human-aware task planning: An application to mobile robots, 2010, TIST.

[6] Tim Miller. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.

[7] Stuart J. Russell, et al. Principles of Metareasoning, 1989, Artif. Intell.

[8] Thomas Eiter, et al. Updating action domain descriptions, 2005, IJCAI.

[9] Yu Zhang, et al. Planning for serendipity, 2015, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

[10] Daniel S. Weld, et al. Intelligible Artificial Intelligence, 2018, ArXiv.

[11] Simon Parsons, et al. Argumentation strategies for plan resourcing, 2011, AAMAS.

[12] Rachid Alami, et al. Toward Human-Aware Robot Task Planning, 2006, AAAI Spring Symposium: To Boldly Go Where No Human-Robot Team Has Gone Before.

[13] Maria Fox, et al. Explainable Planning, 2017, ArXiv.

[14] T. Lombrozo. Explanation and Abductive Inference, 2012.

[15] T. Lombrozo. The structure and function of explanations, 2006, Trends in Cognitive Sciences.

[16] Stuart J. Russell, et al. Angelic Semantics for High-Level Actions, 2007, ICAPS.

[17] Yu Zhang, et al. Planning with Resource Conflicts in Human-Robot Cohabitation, 2016, AAMAS.

[18] Peter Norvig, et al. Artificial Intelligence: A Modern Approach, 1995.

[19] Daniel Bryce, et al. Maintaining Evolving Domain Models, 2016, IJCAI.

[20] Yu Zhang, et al. Plan explicability and predictability for robot task planning, 2017, IEEE International Conference on Robotics and Automation (ICRA).

[21] Elliot E. Entin, et al. Adaptive Team Coordination, 1999, Hum. Factors.

[22] Pat Langley, et al. Seeing Beyond Shadows: Incremental Abductive Reasoning for Plan Understanding, 2013, AAAI Workshop: Plan, Activity, and Intent Recognition.

[23] Yu Zhang, et al. Explicable Robot Planning as Minimizing Distance from Expected Behavior, 2016, ArXiv.

[24] Sailik Sengupta, et al. RADAR - A Proactive Decision Support System for Human-in-the-Loop Planning, 2017, AAAI Fall Symposia.

[25] Iyad Rahwan, et al. Agreeing on plans through iterated disputes, 2010, AAMAS.

[26] Shirin Sohrabi, et al. A Novel Iterative Approach to Top-k Planning, 2018, ICAPS.

[27] Yu Zhang, et al. Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy, 2017, IJCAI.

[28] Susanne Biundo-Stephan, et al. Making Hybrid Plans More Clear to Human Users - A Formal Approach for Generating Sound Explanations, 2012, ICAPS.

[29] S. Kambhampati. Plan Explicability for Robot Task Planning, 2016.

[30] J. Dessalles, et al. Reasoning as a lie detection device, 2011, Behavioral and Brain Sciences.

[31] Jorge A. Baier, et al. Preferred Explanations: Theory and Generation via Planning, 2011, AAAI.

[32] Marc Cavazza, et al. Automated Extension of Narrative Planning Domains with Antonymic Operators, 2015, AAMAS.

[33] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.

[34] Subbarao Kambhampati. A Classification of Plan Modification Strategies Based on Coverage and Information Requirements, 1990.

[35] Bernhard Nebel, et al. Coming up With Good Excuses: What to do When no Plan Can be Found, 2010, Cognitive Robotics.

[36] Subbarao Kambhampati, et al. Hierarchical Expertise-Level Modeling for User Specific Robot-Behavior Explanations, 2020, AAAI.

[37] Cade Earl Bartlett. Communication between Teammates in Urban Search and Rescue, 2015.

[38] Subbarao Kambhampati, et al. Generating diverse plans to handle unknown and partially known user preferences, 2012, Artif. Intell.

[39] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.

[40] Subbarao Kambhampati, et al. Implicit Robot-Human Communication in Adversarial and Collaborative Environments, 2018, ArXiv.

[41] Matthias Scheutz, et al. Coordination in human-robot teams using mental modeling and plan recognition, 2014, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

[42] Alessandro Saffiotti, et al. Too cool for school - adding social constraints in human aware planning, 2014.

[43] Subbarao Kambhampati, et al. Algorithms for the Greater Good! On Mental Modeling and Acceptable Symbiosis in Human-AI Collaboration, 2018, ArXiv.

[44] Marcello Cirillo. Planning in Inhabited Environments, 2011, KI - Künstliche Intelligenz.

[45] Sailik Sengupta, et al. MA-RADAR – A Mixed-Reality Interface for Collaborative Decision Making, 2018.

[46] Craig A. Knoblock, et al. PDDL - the Planning Domain Definition Language, 1998.

[47] Pat Langley, et al. Explainable Agency for Intelligent Autonomous Systems, 2017, AAAI.

[48] Subbarao Kambhampati, et al. Handling Model Uncertainty and Multiplicity in Explanations via Model Reconciliation, 2018, ICAPS.

[49] Andreas Herzig, et al. On the revision of planning tasks, 2014, ECAI.

[50] Yu Zhang, et al. AI Challenges in Human-Robot Cognitive Teaming, 2017, ArXiv.

[51] Rachid Alami, et al. On human-aware task and motion planning abilities for a teammate robot, 2014.

[52] Nancy J. Cooke, et al. Interactive Team Cognition, 2013, Cogn. Sci.

[53] Stuart J. Russell, et al. Metaphysics of Planning Domain Descriptions, 2016, AAAI.

[54] Stephanie Rosenthal, et al. Dynamic generation and refinement of robot verbalization, 2016, IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN).

[55] Rachel K. E. Bellamy, et al. Visualizations for an Explainable Planning Agent, 2017, IJCAI.

[56] Subbarao Kambhampati, et al. Projection-Aware Task Planning and Execution for Human-in-the-Loop Operation of Robots in a Mixed-Reality Workspace, 2018, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

[57] Subbarao Kambhampati, et al. Plan Explanations as Model Reconciliation - An Empirical Study, 2018, ArXiv.

[58] Subbarao Kambhampati, et al. Robust planning with incomplete domain models, 2017, Artif. Intell.