Application of a System-Wide Trust Strategy when Supervising Multiple Autonomous Agents

When interacting with complex systems, the manner in which an operator trusts automation influences system performance. Recent studies have demonstrated that people tend to apply trust broadly rather than calibrating trust to each component of the system individually (e.g., Keller & Rice, 2010). While this system-wide trust effect has been established in basic situations such as judging gauges, it has not been studied in realistic settings such as collaboration with autonomous agents in a multi-agent system. This study used a multiple-UAV control simulation to explore how people apply trust across multiple autonomous agents in a supervisory control setting. Participants interacted with four UAVs that used automated target recognition (ATR) systems to identify targets as enemy or friendly. When one of the autonomous agents was inaccurate and performance information was provided, participants 1) were less accurate, 2) were more likely to verify the ATR's determination, 3) spent more time verifying images, and 4) rated the other systems as less trustworthy even though those systems were 100% accurate. These findings support previous work demonstrating the prevalence of system-wide trust and extend the conditions under which system-wide trust strategies are applied. This work suggests that multi-agent systems should provide carefully designed cues and training to mitigate the system-wide trust effect.
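The contrast between component-specific and system-wide trust strategies can be made concrete with a small simulation. The sketch below is illustrative only and is not the authors' experimental task or model: the agent reliabilities (three perfect agents, one 70% accurate) and the pooled-accuracy trust estimate are assumptions chosen to mirror the four-UAV ATR scenario described above.

```python
import random

random.seed(42)

# Hypothetical reliabilities: three perfect ATR agents and one inaccurate one.
RELIABILITIES = [1.0, 1.0, 1.0, 0.7]
N_TRIALS = 200

correct = [0] * len(RELIABILITIES)
total = [0] * len(RELIABILITIES)
for _ in range(N_TRIALS):
    for i, reliability in enumerate(RELIABILITIES):
        total[i] += 1
        if random.random() < reliability:
            correct[i] += 1

# Component-specific trust: each agent is judged on its own record.
component_trust = [c / t for c, t in zip(correct, total)]

# System-wide trust: one pooled accuracy estimate is applied to every agent.
pooled_trust = sum(correct) / sum(total)

for i, trust in enumerate(component_trust):
    print(f"agent {i}: component-specific trust = {trust:.2f}, "
          f"system-wide trust = {pooled_trust:.2f}")
```

Under the pooled estimate, every agent receives a trust score of roughly 0.92, so the three perfectly reliable agents are penalized for the one faulty agent's errors; this parallels the finding that participants rated the accurate systems as less trustworthy after a single agent erred.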

[1] Christopher D. Wickens et al. A model for types and levels of human interaction with automation. IEEE Trans. Syst. Man Cybern. Part A, 2000.

[2] Linda G. Pierce et al. The Perceived Utility of Human and Automated Aids in a Visual Detection Task. Hum. Factors, 2002.

[3] John D. Lee et al. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors, 2004.

[4] Linda G. Pierce et al. Automation Usage Decisions: Controlling Intent and Appraisal Errors in a Target Detection Task. Hum. Factors, 2007.

[5] N. Moray et al. Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 1992.

[6] Raja Parasuraman et al. Humans and Automation: Use, Misuse, Disuse, Abuse. Hum. Factors, 1997.

[7] Stephen Rice et al. System-Wide versus Component-Specific Trust Using Multiple Aids. The Journal of General Psychology, 2009.

[8] Stephen Rice et al. Using System-Wide Trust Theory to Make Predictions About Dependence on Four Diagnostic Aids. The Journal of General Psychology, 2010.

[9] Stephen Rice et al. Using System-Wide Trust Theory to Reveal the Contagion Effects of Automation False Alarms and Misses on Compliance and Reliance in a Simulated Aviation Task. 2013.

[10] Shawn L. Berman et al. The Structure of Optimal Trust: Moral and Strategic Implications. 1999.

[11] Shoshana Zuboff. In the Age of the Smart Machine. 1988.

[12] Linda G. Pierce et al. Predicting Misuse and Disuse of Combat Identification Systems. 2001.

[13] Dietrich Manzey et al. Misuse of automated decision aids: Complacency, automation bias and the impact of training experience. Int. J. Hum. Comput. Stud., 2008.

[14] Daniel R. Ilgen et al. Not All Trust Is Created Equal: Dispositional and History-Based Trust in Human-Automation Interactions. Hum. Factors, 2008.

[15] Thomas A. Dingus et al. Human Factors Field Evaluation of Automotive Headway Maintenance/Collision Warning Devices. Hum. Factors, 1997.

[16] Linda G. Pierce et al. Misuse and Disuse of Automated Aids. 1999.