Agent Transparency: A Review of Current Theory and Evidence

As machines and agents become more autonomous, it has become increasingly clear to human factors/ergonomics researchers and practitioners that agent transparency is a critical issue for effective human–agent teaming. Transparency methods can provide the foundation for establishing shared awareness and shared intent between humans and intelligent machines. However, to date, the existing body of research on agent transparency has not been systematically documented. The purpose of this article is to summarize and evaluate current psychological theories and empirical evidence regarding effective agent transparency in human–autonomy teaming. We start by examining how transparency has been operationalized in the literature, discussing the two prominent theoretical frameworks of human–autonomy teaming. We then review the empirical findings concerning how transparency affects key human–autonomy teaming variables, such as operator accuracy, decision time, situation awareness, perceived usability, and workload. This article also provides an overview of the experimental tasks, scenarios, and interfaces used in past studies and synthesizes how transparency has been operationalized and manipulated in that work. We then summarize the results and conclude by providing key recommendations for future research.
