Paradigm Development for Identifying and Validating Indicators of Trust in Automation in the Operational Environment of Human Automation Integration

Calibrated trust in automation is a key factor in fully integrating the human user into human-automation integrated systems, and true integration is required if system performance is to meet expectations. Trust in automation (TiA) has been studied using surveys, but no valid, objective indicators of TiA have yet been established. Moreover, these studies have been conducted in tightly controlled laboratory environments and therefore do not necessarily translate into real-world applications that might improve joint system performance. Through a literature review, we established constraints on an operational paradigm aimed at identifying indicators of TiA. Our goal in this paper was to develop such an operational paradigm, using methods from human factors and cognitive neuroscience, to yield valid TiA indicators. Driving automation was chosen as the operational environment because most adults are familiar with the driving task and its consequence structure, so little training was required. Initial behavioral and survey data confirm that the design constraints were met. We therefore believe that our paradigm provides a valid means of conducting operational experiments aimed at further understanding TiA and its psychophysiological underpinnings.
