Incorporating Transparency During Trust-Guided Behavior Adaptation

An important consideration in human-robot teams is ensuring that the robot is trusted by its human teammates. Without adequate trust, the robot may be underutilized or disused entirely, potentially exposing its human teammates to dangerous situations. We have previously investigated an agent that can assess its own trustworthiness and adapt its behavior accordingly. In this paper, we extend that work with a transparency layer that allows the agent to explain why it adapted its behavior. Because the agent grounds its explanations in the explicit feedback it receives from an operator, the explanations remain simple, concise, and understandable. We evaluate the system on scenarios from a simulated robotics domain and demonstrate that the agent's explanations closely align with the operator's feedback.
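As a rough illustrative sketch only (the paper's actual agent and its case-based inverse-trust model are not reproduced here), the Python snippet below shows one way such a loop could be structured: the agent logs explicit operator feedback, adapts its behavior when its self-assessed trust estimate drops below a threshold, and explains the adaptation by pointing back to the feedback that triggered it. All class names, method names, and the trust heuristic are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Feedback:
    """Explicit operator feedback on a single robot action (hypothetical structure)."""
    action: str
    positive: bool      # True if the operator approved of the action
    comment: str = ""   # optional free-text remark from the operator

@dataclass
class InverseTrustAgent:
    """Sketch of an agent that estimates its own trustworthiness from operator
    feedback, adapts its behavior when that estimate drops, and explains the
    adaptation by citing the feedback that motivated it."""
    behavior: str = "default"
    trust_threshold: float = 0.5
    feedback_log: List[Feedback] = field(default_factory=list)
    last_adaptation_cause: Optional[List[Feedback]] = None

    def record_feedback(self, fb: Feedback) -> None:
        self.feedback_log.append(fb)

    def estimate_trust(self) -> float:
        """Crude inverse-trust proxy: fraction of recent feedback that was positive."""
        recent = self.feedback_log[-5:]
        if not recent:
            return 1.0
        return sum(fb.positive for fb in recent) / len(recent)

    def maybe_adapt(self, alternative_behavior: str) -> bool:
        """Switch behaviors if estimated trust falls below the threshold,
        remembering the negative feedback that motivated the change."""
        if self.estimate_trust() < self.trust_threshold:
            self.last_adaptation_cause = [
                fb for fb in self.feedback_log[-5:] if not fb.positive
            ]
            self.behavior = alternative_behavior
            return True
        return False

    def explain_adaptation(self) -> str:
        """Produce a concise explanation grounded in the operator's own feedback."""
        if not self.last_adaptation_cause:
            return "No adaptation has occurred."
        reasons = "; ".join(
            f"you marked '{fb.action}' as unsatisfactory"
            for fb in self.last_adaptation_cause
        )
        return f"I switched to the '{self.behavior}' behavior because {reasons}."

if __name__ == "__main__":
    agent = InverseTrustAgent()
    agent.record_feedback(Feedback(action="moved through doorway quickly", positive=False))
    agent.record_feedback(Feedback(action="entered room without scanning", positive=False))
    if agent.maybe_adapt(alternative_behavior="cautious"):
        print(agent.explain_adaptation())
```

In this sketch, the explanation is generated directly from the stored feedback entries rather than from the agent's internal state, which is what keeps it short and phrased in the operator's own terms.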
