Intelligent Agent Transparency in Human–Agent Teaming for Multi-UxV Management

Objective: We investigated the effects of agent transparency level on operator performance, trust, and workload in the context of human–agent teaming for multirobot management. Background: Participants played the role of a heterogeneous unmanned vehicle (UxV) operator and were instructed to complete various missions by giving orders to UxVs through a computer interface. An intelligent agent (IA) assisted the participant by recommending two plans (a top recommendation and a secondary recommendation) for every mission. Method: The experiment employed a within-subjects design with three levels of agent transparency. There were eight missions in each of three experimental blocks, grouped by level of transparency. Within each block, the IA was incorrect on three of the eight missions owing to external information (e.g., commander's intent and intelligence). Operator performance, trust, workload, and usability data were collected. Results: Operator performance, trust, and perceived usability increased as a function of transparency level. Subjective and objective workload data indicate that participants' workload did not increase with transparency, nor did response time. Conclusion: Unlike previous research, in which increased transparency improved performance and trust calibration only at the cost of greater workload and longer response time, our results support the benefits of transparency for performance effectiveness without these additional costs. Application: The current results will facilitate the implementation of IAs in military settings and will provide useful data for the design of heterogeneous UxV teams.
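To make the experimental structure in the Method section concrete, the short Python sketch below generates the kind of mission schedule the abstract describes: three blocks of eight missions, one transparency level per block, with the IA's recommendation incorrect on three of the eight missions in each block. The condition labels, the randomized block order, and the function names (build_block, build_session) are illustrative assumptions rather than details taken from the published protocol.

```python
import random

# Assumed labels for the three transparency conditions (not specified in the abstract).
TRANSPARENCY_LEVELS = ["Level 1", "Level 2", "Level 3"]
MISSIONS_PER_BLOCK = 8
INCORRECT_PER_BLOCK = 3  # IA's top recommendation is wrong on 3 of 8 missions per block


def build_block(level, rng):
    """One experimental block: eight missions at a single transparency level,
    with the IA's recommendation flagged correct/incorrect in a fixed 5:3 ratio."""
    correctness = ([True] * (MISSIONS_PER_BLOCK - INCORRECT_PER_BLOCK)
                   + [False] * INCORRECT_PER_BLOCK)
    rng.shuffle(correctness)
    return [{"transparency": level, "mission": i + 1, "ia_correct": ok}
            for i, ok in enumerate(correctness)]


def build_session(rng=None):
    """One participant's session: three blocks, block order randomized here
    (the abstract does not state the counterbalancing scheme; this is an assumption)."""
    rng = rng or random.Random()
    order = TRANSPARENCY_LEVELS[:]
    rng.shuffle(order)
    return [mission for level in order for mission in build_block(level, rng)]


if __name__ == "__main__":
    # Print a sample 24-mission schedule for one simulated participant.
    for trial in build_session(random.Random(42)):
        print(trial)
```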
