Explicability as Minimizing Distance from Expected Behavior

Effective human-AI collaboration requires accounting for how the AI agent's behavior is perceived by the humans in the loop. Task plans generated without such considerations can appear inexplicable from the human's point of view, a problem that often arises because the human has a partial or inaccurate understanding of the agent's planning model. The consequences range from increased cognitive load to serious safety concerns around a physical agent. In this paper, we address this issue by modeling plan explicability as a function of the distance between the plan the agent makes and the plan the human expects it to make. We learn a regression model that maps plan distances to explicability scores and develop an anytime search algorithm that uses this model as a heuristic to produce progressively more explicable plans. We evaluate the effectiveness of our approach in a simulated autonomous car domain and a physical robot domain.
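As a rough illustration of the idea (not the authors' implementation), the sketch below shows one way a planner might score candidates: compute a simple action-level distance between a candidate plan and the plan the human is expected to anticipate, map that distance to an explicability score with a learned regression model, and prefer candidates the model predicts to be more explicable. The distance measure, the training data, and all function names here are assumptions made for illustration only.

```python
# Minimal sketch, assuming an action-set plan distance and a linear regressor;
# this is illustrative and not the paper's actual distance measure or model.
from typing import List, Sequence
from sklearn.linear_model import LinearRegression


def action_set_distance(plan_a: Sequence[str], plan_b: Sequence[str]) -> float:
    """Fraction of actions that differ between two plans (a crude plan distance)."""
    a, b = set(plan_a), set(plan_b)
    union = a | b
    return len(a ^ b) / len(union) if union else 0.0


# Hypothetical training data: plan distances paired with human-labeled
# explicability scores (e.g., gathered from user studies).
train_distances = [[0.0], [0.2], [0.5], [0.8], [1.0]]
train_scores = [1.0, 0.8, 0.5, 0.2, 0.0]

explicability_model = LinearRegression().fit(train_distances, train_scores)


def explicability_score(candidate: Sequence[str], expected: Sequence[str]) -> float:
    """Predicted explicability of a candidate plan w.r.t. the human-expected plan."""
    d = action_set_distance(candidate, expected)
    return float(explicability_model.predict([[d]])[0])


def most_explicable(candidates: List[Sequence[str]], expected: Sequence[str]) -> Sequence[str]:
    """Pick the candidate plan the model predicts to be most explicable."""
    return max(candidates, key=lambda p: explicability_score(p, expected))


if __name__ == "__main__":
    expected_plan = ["pickup A", "move A to table", "putdown A"]
    candidates = [
        ["pickup A", "move A to table", "putdown A"],          # matches expectation
        ["pickup A", "spin", "move A to table", "putdown A"],  # unexpected extra action
    ]
    print(most_explicable(candidates, expected_plan))
```

In an anytime search, a score of this kind would be folded into the node-evaluation heuristic so that successive solutions trade plan cost against predicted explicability.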
