Extending Three Existing Models to Analysis of Trust in Automation: Signal Detection, Statistical Parameter Estimation, and Model-Based Control

Objective: The objective is to propose three quantitative models of trust in automation. Background: The current trust-in-automation literature includes a variety of definitions and frameworks, which are reviewed. Method: This research shows how three existing models, namely those for signal detection, statistical parameter estimation calibration, and internal model-based control, can be revised and reinterpreted to apply to trust in automation in ways useful for human–system interaction design. Results: The reinterpretations are presented quantitatively and graphically, and measures of trust and trust calibration are discussed, along with examples of application. Conclusion: The resulting models can be applied to provide quantitative trust measures in future experiments or system designs. Applications: Simple examples are provided to explain how model application works in the three trust contexts corresponding to signal detection, parameter estimation calibration, and model-based open-loop control.
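
As a rough illustration of how the signal-detection reinterpretation could yield quantitative trust measures, the sketch below treats the operator's decision to rely on the automation as a signal-detection response and computes sensitivity (d') and response bias (criterion c). The framing, function name, and example numbers are illustrative assumptions made for this summary, not the paper's own formulation.

```python
from statistics import NormalDist

def sdt_trust_measures(hit_rate: float, false_alarm_rate: float):
    """Hypothetical signal-detection-style trust measures.

    Hits: the operator relies on the automation when it is actually correct.
    False alarms: the operator relies on the automation when it is wrong.
    d' indexes how well reliance tracks automation correctness (a possible
    trust-calibration measure); the criterion c indexes the overall bias
    toward relying (trusting) versus withholding reliance.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF (z-score)
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Example: reliance on correct advice 90% of the time,
# but also reliance on incorrect advice 30% of the time.
d, c = sdt_trust_measures(0.90, 0.30)
print(f"sensitivity d' = {d:.2f}, reliance bias c = {c:.2f}")
```

Under this assumed framing, a well-calibrated operator would show high d' (reliance closely tracks automation correctness), while the sign of c would indicate a tendency toward over-trust or under-trust.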
