Human-agent collaborations: trust in negotiating control

For human-agent collaborations to prosper, end-users need to trust the agent(s) they interact with. This is especially important in scenarios where users and agents negotiate control to achieve objectives in real time (e.g. helping surgeons with precision tasks, parking a semi-autonomous car, or completing objectives in a video game). Too much trust, and the user may over-rely on the agent; insufficient trust, and the user may not adequately utilise it. In addition, measuring trust and trustworthiness is difficult and presents a number of challenges. In this paper, we discuss current approaches to measuring trust and explain why they can be inadequate in a real-time setting, where it is critical to know the extent to which the user currently trusts the agent. We then describe our attempts at quantifying the relationship between trust, performance and control.
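The abstract does not specify how the trust–performance–control relationship is quantified, but one simple starting point is to collect per-trial measurements of each quantity and examine their pairwise correlations. The sketch below is purely illustrative and not the authors' method: the variable names (`trust`, `performance`, `control_share`), the 7-point trust scale, and the synthetic data are all assumptions.

```python
# Minimal sketch (assumed, not the paper's method): relate self-reported trust,
# task performance, and the share of control ceded to the agent across trials.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 40

# Hypothetical per-trial measurements (synthetic placeholders).
trust = rng.uniform(1, 7, n_trials)  # e.g. score on a 7-point trust questionnaire
control_share = np.clip(trust / 7 + rng.normal(0, 0.1, n_trials), 0, 1)  # fraction of control delegated
performance = 0.6 * control_share + rng.normal(0, 0.15, n_trials)  # task score per trial

data = np.vstack([trust, performance, control_share])
labels = ["trust", "performance", "control_share"]

# Pairwise Pearson correlations as one crude summary of the three-way relationship.
corr = np.corrcoef(data)
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        print(f"r({labels[i]}, {labels[j]}) = {corr[i, j]:+.2f}")
```

In a real-time setting one would likely replace the post-hoc questionnaire score with a continuously updated trust estimate, but the correlation structure above is a reasonable first check on any such dataset.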
