Building Appropriate Trust in Human-Robot Teams

Future robotic systems are expected to transition from tools to teammates, characterized by increasingly autonomous, intelligent robots that interact with humans in a more naturalistic manner, approaching a relationship akin to human–human teamwork. Given the impact of trust observed in other systems, trust in the robot team member will likely be critical to effective and safe performance. Our thesis in this paper is that trust in a robot team member must be appropriately calibrated rather than simply maximized. Drawing on mental model theory, we describe how the human team member's understanding of the system contributes to trust in human–robot teaming. We discuss how mental models relate to the physical and behavioral characteristics of the robot, on the one hand, and to affective and behavioral outcomes, such as trust and system use, disuse, and misuse, on the other. We conclude with recommendations for best practices in the research and design of human–robot teams and other systems using artificial intelligence.