Automation bias: decision making and performance in high-tech cockpits.

Automated aids and decision support tools are rapidly becoming indispensable in high-technology cockpits and are assuming increasing control of "cognitive" flight tasks, such as calculating fuel-efficient routes, navigating, or detecting and diagnosing system malfunctions and abnormalities. This study was designed to investigate automation bias, a recently documented factor in the use of automated aids and decision support systems. The term refers to omission and commission errors resulting from the use of automated cues as a heuristic replacement for vigilant information seeking and processing. Glass-cockpit pilots flew flight scenarios involving automation events, or opportunities for automation-related omission and commission errors. Although experimentally manipulated accountability demands did not significantly impact performance, post hoc analyses revealed that pilots who reported an internalized perception of "accountability" for their performance and their strategies of interaction with the automation were significantly more likely to double-check automated functioning against other cues, and less likely to commit errors, than those who did not share this perception. Pilots were also likely to erroneously "remember" the presence of expected cues when describing their decision-making processes.