Trust in Autonomous Systems for Threat Analysis: A Simulation Methodology

Human operators will increasingly team with autonomous systems in military and security settings, for example, in the evaluation and analysis of threats. Determining whether humans pose a threat is a particular challenge to which future autonomous systems may contribute. Optimal trust calibration is critical for mission success, but most trust research has addressed conventional automated systems of limited intelligence. This article identifies multiple factors that may influence trust in autonomous systems. Trust may be undermined by various sources of demand and uncertainty, including the cognitive demands resulting from the complexity and unpredictability of the system, “social” demands resulting from the system’s capacity to function as a team member, and self-regulative demands associated with perceived threats to personal competence. It is proposed that existing gaps in trust research may be addressed using simulation methodologies. A simulated environment developed by the research team is described: it represents a “town-clearing” task in which the human operator teams with a robot that can be equipped with various sensors and with software for intelligent analysis of sensor data. The functionality of the simulator is illustrated, together with future research directions.
