Explanations as Model Reconciliation - A Multi-Agent Perspective

In this paper, we demonstrate how a planner (or a robot as an embodiment of it) can explain its decisions to multiple agents in the loop, considering not only the model that it used to come up with those decisions but also the (often misaligned) models of the same task that the other agents might have. To do this, we build on our previous work on multi-model explanation generation (Chakraborti et al. 2017b) and extend it to account for settings where there is uncertainty over the robot's model of the explainee and/or there are multiple explainees with different models to explain to. We illustrate these concepts in a demonstration on a robot involved in a typical search and reconnaissance scenario with a human teammate and an external human supervisor.

In (Chakraborti et al. 2017b) we showed how a robot can explain its decisions to a human in the loop who might have a different understanding of the same problem, either in terms of the agent's knowledge or intentions, or in terms of its capabilities. These explanations are intended to bring the human's mental model closer to the robot's estimation of the ground truth. We refer to this as the model reconciliation process, by the end of which a plan that is optimal in the robot's model is also optimal in the human's updated mental model. We also showed how this process can be carried out while transferring the minimum number of model updates possible, via what we call minimally complete explanations (MCEs). Such techniques can be essential contributors to the dynamics of trust and teamwork in human-agent collaborations: they significantly lower the communication overhead between agents while providing just enough information to keep the agents on the same page with respect to their understanding of each other's tasks and capabilities, thereby reducing the cognitive burden on the human teammates and increasing their situational awareness.

Figure 1: The model reconciliation process in the case of model uncertainty or multiple explainees.

The process of model reconciliation is illustrated in Figure 1. The robot's model, which is its ground truth, is represented by $\mathcal{M}^R$ (note that the "model" of a planning problem includes the state and goal information as well as the domain or action model), and $\pi^*_{\mathcal{M}^R}$ is the optimal plan in it. A human $H$ interacting with the robot may have a different model $\mathcal{M}^H$ of the same planning problem, and the optimal plan $\pi^*_{\mathcal{M}^H}$ in the human's model can diverge from the robot's, leading to the robot needing to explain its decision to the human. As explained above, a multi-model explanation is an update or correction to the human's mental model, yielding a new model $\widehat{\mathcal{M}}^H$ in which the optimal plan $\pi^*_{\widehat{\mathcal{M}}^H}$ is equivalent to $\pi^*_{\mathcal{M}^R}$.
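As a concrete illustration of the single-explainee case, the following is a minimal sketch of a model-space search for an MCE, under two simplifying assumptions that are ours and not part of (Chakraborti et al. 2017b): a planning model is represented as a set of model features (e.g. initial-state facts, preconditions, and effects), and the planner is hidden behind two hypothetical callables, optimal_cost(model) and plan_cost(model, plan).

```python
# A minimal sketch of minimally complete explanation (MCE) search, assuming
# models are sets of "model features" and explanations are unit edits that move
# the human's model toward the robot's. The callables `optimal_cost` and
# `plan_cost` are hypothetical stand-ins for an optimal planner.
from itertools import combinations

def mce_search(robot_model, human_model, robot_plan, optimal_cost, plan_cost):
    """Return the smallest set of model edits that makes robot_plan optimal
    in the human's updated model, or None if no such explanation exists."""
    # Candidate unit edits: features the human is missing or has in excess.
    edits = [("add", f) for f in robot_model - human_model] + \
            [("remove", f) for f in human_model - robot_model]

    def apply_edits(model, explanation):
        model = set(model)
        for op, feature in explanation:
            if op == "add":
                model.add(feature)
            else:
                model.discard(feature)
        return frozenset(model)

    # Enumerate candidate explanations by increasing size, so the first
    # explanation found is minimal ("breadth-first" in model space).
    for k in range(len(edits) + 1):
        for explanation in combinations(edits, k):
            updated = apply_edits(human_model, explanation)
            # Each node check is itself an optimal planning problem: the
            # robot's plan must cost no more than an optimal plan in the
            # updated human model.
            if plan_cost(updated, robot_plan) == optimal_cost(updated):
                return set(explanation)
    return None
```

The dominant cost is the optimality check at every node, since each one requires invoking an optimal planner on a candidate model; this is what makes naively repeating the search for every teammate or every possible model unattractive.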
Now imagine that the planner is required to explain the same problem to multiple different human teammates $H_i$, or that the model of the human is not known with certainty (an equivalent setting, since it corresponds to multiple possible models). The robot can, of course, call upon the previous service to compute MCEs for each such configuration independently. However, this can result in situations where the explanations computed for the individual models are not consistent across all the possible target domains. In the case of multiple teammates being explained to, this may cause confusion and loss of trust; and in the case of model uncertainty, such an approach cannot even guarantee that the resulting explanation will be acceptable in the real domain. Instead, we want to find a single explanation such that $\forall i \; \pi^*_{\widehat{\mathcal{M}}^{H_i}} \equiv \pi^*_{\mathcal{M}^R}$, i.e. a single model update that makes the given plan optimal in all the updated domains (or in all possible domains). At first glance such an approach, even though desirable, might appear prohibitively expensive, especially since solving for a single MCE already involves a search in model space where evaluating each search node requires solving an optimal planning problem. However, it turns out that the exact same search strategy can be employed here as well, by modifying the way in which the models are represented and the way the equivalence criterion is computed during the search.

Thus, in this paper, we (1) outline how uncertainty over models in the multi-model planning setting can be represented in the form of annotated models; (2) show how the search for a minimally complete explanation in this revised setting can be compiled to the original MCE search based on that representation; and (3) demonstrate these concepts in a typical search and reconnaissance setting involving a robot and its human teammate internal to a disaster scene and an external human commander supervising the proceedings.
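To make the revised acceptance criterion concrete, here is a brute-force variant of the same sketch in which the only change is the equivalence test: a candidate explanation is accepted only if the robot's plan becomes optimal in every (per-teammate or possible) updated model. The helper names follow the hypothetical mce_search sketch above; this is a specification of the desired output, not the annotated-model compilation contributed by the paper, which avoids enumerating the possible models explicitly.

```python
# A brute-force sketch of the multi-model condition: one explanation must make
# the robot's plan optimal in every human model in `human_models`. Uses the
# same hypothetical `optimal_cost` / `plan_cost` planner stand-ins as before.
from itertools import combinations

def multi_model_mce(robot_model, human_models, robot_plan, optimal_cost, plan_cost):
    # Pool candidate unit edits from the differences with every human model.
    edits = set()
    for h in human_models:
        edits |= {("add", f) for f in robot_model - h}
        edits |= {("remove", f) for f in h - robot_model}
    edits = sorted(edits)  # assumes features are strings; gives a stable order

    def apply_edits(model, explanation):
        model = set(model)
        for op, feature in explanation:
            if op == "add":
                model.add(feature)
            else:
                model.discard(feature)
        return frozenset(model)

    # Same search as the single-model case; only the acceptance test changes.
    for k in range(len(edits) + 1):
        for explanation in combinations(edits, k):
            updated = [apply_edits(h, explanation) for h in human_models]
            if all(plan_cost(m, robot_plan) == optimal_cost(m) for m in updated):
                return set(explanation)
    return None
```

Because the search strategy itself is unchanged, the same machinery that computes an MCE for one explainee can, with this modified criterion (or the annotated-model compilation described above), produce a single explanation that holds for all of them.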

[1] Tathagata Chakraborti et al. Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy. IJCAI, 2017.

[2] Subbarao Kambhampati et al. Discovering Underlying Plans Based on Distributed Representations of Actions. AAMAS, 2016.

[3] Subbarao Kambhampati et al. Balancing Explicability and Explanation in Human-Aware Planning. AAAI Fall Symposia, 2017.

[4] Yu Zhang et al. Plan Explicability and Predictability for Robot Task Planning. IEEE International Conference on Robotics and Automation (ICRA), 2017.

[5] Yu Zhang et al. AI Challenges in Human-Robot Cognitive Teaming. arXiv, 2017.

[6] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. 1995.

[7] Daniel Bryce et al. Maintaining Evolving Domain Models. IJCAI, 2016.

[8] Craig A. Knoblock et al. PDDL: The Planning Domain Definition Language. 1998.

[9] Subbarao Kambhampati et al. Robust Planning with Incomplete Domain Models. Artificial Intelligence, 2017.

[10] T. L. McCluskey et al. Acquiring Planning Domain Models Using LOCM. The Knowledge Engineering Review, 2013.

[11] Cade Earl Bartlett. Communication Between Teammates in Urban Search and Rescue. 2015.

[12] Qiang Yang et al. Learning Action Models from Plan Examples with Incomplete Knowledge. ICAPS, 2005.

[13] Subbarao Kambhampati et al. Generating Diverse Plans to Handle Unknown and Partially Known User Preferences. Artificial Intelligence, 2012.

[14] Subbarao Kambhampati. Model-lite Planning for the Web Age Masses: The Challenges of Planning with Incomplete and Evolving Domain Models. AAAI, 2007.