Why Human-Autonomy Teaming?

Automation has entered nearly every aspect of our lives, yet it often remains hard to understand. Why? Automation is frequently brittle, requiring constant human oversight to ensure it operates as intended, and that oversight has become harder as automation has grown more complex. Human-Autonomy Teaming (HAT) has been proposed to address this problem. HAT builds on advances in automation transparency, a means of giving insight into the reasoning behind automated recommendations and actions, together with advances in human-automation communication (e.g., voice). These advances, in turn, permit greater trust in the automation when warranted, and less when not, allowing more targeted supervision of automated functions. This paper proposes a framework for HAT incorporating three key tenets: transparency, bi-directional communication, and operator-directed authority. These tenets, together with more capable automation, represent a shift in human-automation relations.
