Individual Differences in Attributes of Trust in Automation: Measurement and Application to System Design

Computer-based automation of sensing, analysis, memory, decision-making, and control in industrial, business, medical, scientific, and military applications is becoming increasingly sophisticated, employing various techniques of artificial intelligence for learning, pattern recognition, and computation. Research has shown that proper use of automation depends heavily on operator trust. As a result, the topic of trust has become an active subject of research and discussion in the applied disciplines of human factors and human-systems integration. While various papers have pointed to the many factors that influence trust, no consensus definition of trust currently exists. This paper reviews previous studies of trust in automation, with emphasis on its meaning and on the factors that determine subjective assessments of trust and of automation trustworthiness (which is sometimes, but not always, regarded as an objectively measurable property of the automation). The paper asserts that certain attributes normally associated with human morality can usefully be applied to computer-based automation as it becomes more intelligent and more responsive to its human user. It goes on to suggest that the automation, based on its own experience with the user, can develop reciprocal attributes that characterize its own trust of the user and adapt accordingly. This situation can be modeled as a formal game in which the human user and the automation (computer) engage one another according to a payoff matrix of utilities (benefits and costs); a sketch of this framing appears below. While this is a concept paper lacking empirical data, it offers hypotheses by which future researchers can test for individual differences in the detailed attributes of trust in automation and determine criteria for adjusting automation design to best accommodate these user differences.
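
The following is a minimal sketch of the game-theoretic framing mentioned above: a two-player game between user and automation over a payoff matrix of utilities. The strategy labels and all payoff values are hypothetical illustrations, not taken from the paper; the point is only to show how mutual trust can be analyzed as an equilibrium of such a game.

```python
# Sketch: user-automation interaction as a 2x2 game with a payoff matrix.
# Strategies and utilities below are illustrative assumptions, not the
# paper's own values.

# The user chooses to RELY on or OVERRIDE the automation; the automation
# chooses to ACT autonomously or DEFER to the user.
USER_STRATEGIES = ["rely", "override"]
AUTO_STRATEGIES = ["act", "defer"]

# PAYOFFS[(u, a)] = (user utility, automation utility).
# Example: mutual trust ("rely", "act") benefits both; misplaced trust
# or needless intervention carries costs.
PAYOFFS = {
    ("rely", "act"): (3, 3),
    ("rely", "defer"): (0, 1),
    ("override", "act"): (-1, -1),
    ("override", "defer"): (1, 2),
}

def pure_nash_equilibria():
    """Enumerate strategy pairs where neither player gains by
    unilaterally switching strategies."""
    equilibria = []
    for u in USER_STRATEGIES:
        for a in AUTO_STRATEGIES:
            u_payoff, a_payoff = PAYOFFS[(u, a)]
            user_ok = all(PAYOFFS[(alt, a)][0] <= u_payoff
                          for alt in USER_STRATEGIES)
            auto_ok = all(PAYOFFS[(u, alt)][1] <= a_payoff
                          for alt in AUTO_STRATEGIES)
            if user_ok and auto_ok:
                equilibria.append((u, a))
    return equilibria

if __name__ == "__main__":
    # With the example payoffs above this prints two equilibria:
    # [('rely', 'act'), ('override', 'defer')]
    print(pure_nash_equilibria())
```

Under these example utilities the game has two pure-strategy equilibria, a high-payoff "mutual trust" outcome and a lower-payoff "mutual distrust" outcome, which illustrates the paper's suggestion that both the user's trust of the automation and the automation's reciprocal trust of the user must be established for the better outcome to be reached.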
