Explicability versus Explanations in Human-Aware Planning

Human-aware planning requires an agent to be cognizant of the mental model of the human in the loop during its decision-making process. This can involve generating plans that are explicable to the human, as well as providing explanations when such plans cannot be generated. In this paper, we bring these two concepts together and show how an agent can account for both needs and achieve a trade-off during the plan generation process itself, by means of a model-space search method, MEGA*. This provides a revised perspective on what it means for an AI agent to be "human-aware" by bringing together recent work on explicable planning and plan explanations under the umbrella of a single plan generation process. We illustrate these concepts with a robot performing a typical search and reconnaissance task under an external supervisor.
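To make the trade-off concrete, the following is a minimal, hypothetical sketch (not the paper's actual MEGA* implementation) of the core idea: models are treated as sets of features, the differences between the human's mental model and the robot's model can either be explained (at a per-difference explanation cost) or left unexplained (incurring an explicability penalty weighted by a hyperparameter, here called `alpha`). The function names and cost structure are illustrative assumptions.

```python
from itertools import combinations

def trade_off_explanation(human_model, robot_model, alpha):
    """Hypothetical sketch of an explicability/explanation trade-off.

    Models are sets of features; their symmetric difference is the set
    of discrepancies. For each subset of discrepancies we could explain,
    cost = (number explained) + alpha * (number left unexplained),
    where alpha weights the residual inexplicability of the plan.
    Returns the (cost, subset-to-explain) pair with minimum cost.
    """
    diff = sorted(set(robot_model) ^ set(human_model))
    best = None
    for k in range(len(diff) + 1):
        for subset in combinations(diff, k):
            # Explaining `subset` costs one unit per difference;
            # each unexplained difference costs `alpha`.
            cost = len(subset) + alpha * (len(diff) - len(subset))
            if best is None or cost < best[0]:
                best = (cost, set(subset))
    return best
```

With `alpha > 1` (inexplicability is expensive) the agent explains every discrepancy; with `alpha < 1` it explains nothing and simply bears the explicability penalty; intermediate values yield partial explanations, mirroring the trade-off the abstract describes.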