The benefits of imperfect diagnostic automation: a synthesis of the literature

This quantitative literature review examines how the imperfection, or unreliability, of diagnostic automation affects the performance of a human operator who consults both the automation and the raw data. Data from 20 studies yielded 35 data points comparing performance under varying levels of automation reliability with performance in a non-automated baseline condition. A regression of benefits/costs relative to baseline revealed a strong linear relationship between benefit and reliability. The analysis identified a reliability of 0.70 as the 'crossover point' below which unreliable automation was worse than no automation at all. It also revealed that performance was more strongly affected by reliability under high workload, implicating workload-imposed automation dependence in producing this relationship and suggesting that humans tend to protect the performance of concurrent tasks from the imperfection of diagnostic automation.
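The crossover analysis described above can be sketched as a simple linear regression of performance benefit against reliability, with the crossover point taken as the reliability at which the fitted benefit crosses zero. This is a minimal illustration using synthetic, made-up data points (not the 35 data points from the reviewed studies), constructed so the crossover lands near the reported value of 0.70:

```python
import numpy as np

# Synthetic, illustrative data: each point pairs an automation reliability
# level with a performance benefit (positive) or cost (negative) relative
# to a non-automated baseline. These values are hypothetical.
reliability = np.array([0.50, 0.60, 0.70, 0.80, 0.90, 1.00])
benefit = np.array([-0.40, -0.20, 0.00, 0.20, 0.40, 0.60])

# Fit the linear function: benefit = slope * reliability + intercept.
slope, intercept = np.polyfit(reliability, benefit, 1)

# The 'crossover point' is the reliability at which predicted benefit = 0,
# i.e. where unreliable automation stops being worse than no automation.
crossover = -intercept / slope
print(f"crossover reliability = {crossover:.2f}")  # 0.70 for this synthetic data
```

With real meta-analytic data the fit would not be exact, so the crossover would be read off the regression line rather than from any single data point.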
