Are Well-Calibrated Users Effective Users? Associations Between Calibration of Trust and Performance on an Automation-Aided Task

Objective: We present alternative operationalizations of trust calibration and examine their associations with predictors and outcomes.

Background: Trust calibration (the correspondence between an aid's reliability and a user's trust in the aid) is widely thought to be key to effective human–automation performance. We propose that calibration can be operationalized in three ways. Perceptual accuracy is the extent to which the user accurately perceives the aid's reliability at a single point in time. Perceptual sensitivity and trust sensitivity reflect how well the user adjusts perceived reliability and trust, respectively, as the aid's actual reliability changes over time.

Method: One hundred fifty-five students completed an X-ray screening task with an automated screener. Awareness of the aid's accuracy trajectory and awareness of its error type were examined as predictors; task performance and detection of aid failures were examined as outcomes.

Results: Awareness of the accuracy trajectory was significantly associated with all three operationalizations of calibration, but awareness of error type was not once accuracy trajectory was taken into account. Contrary to expectations, only perceptual accuracy was significantly associated with task performance and failure detection, and together the three operationalizations accounted for only 9% and 4% of the variance in these outcomes, respectively.

Conclusion: Our results suggest that the presumed importance of trust calibration warrants further examination; moderating variables may exist.

Application: Users who were better able to perform the task unaided were also better able to identify and correct aid failures, suggesting that user task training and expertise may benefit human–automation performance.
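The abstract does not specify how the three calibration indices were computed, so the formalization below is a minimal illustrative sketch under assumed notation, not the paper's actual measures. Suppose the aid's true reliability in block $t$ is $r_t$, user $i$'s perceived reliability is $\hat{r}_{it}$, and the user's self-reported trust is $T_{it}$; all symbols here are hypothetical. The three operationalizations could then be written as

\[
\text{perceptual accuracy: } \mathrm{PA}_i = -\bigl|\hat{r}_{it_0} - r_{t_0}\bigr|,
\]
\[
\text{perceptual sensitivity: } \mathrm{PS}_i = \operatorname{corr}_t\bigl(\hat{r}_{it},\, r_t\bigr),
\]
\[
\text{trust sensitivity: } \mathrm{TS}_i = \operatorname{corr}_t\bigl(T_{it},\, r_t\bigr),
\]

where $t_0$ is a single measurement point and the correlations are taken within-person across blocks in which the aid's reliability varies. Under this sketch, perceptual accuracy is a static, one-shot index, whereas the two sensitivity indices capture whether perception and trust track changes in reliability over time, matching the distinction drawn in the abstract.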
