The effect of known decision support reliability on outcome quality and visual information foraging in joint decision making.

Decision support systems (DSSs) are being woven into human workflows from aviation to medicine. In this study, we examine decision quality and visual information foraging for DSSs with different known reliability levels. Thirty-six participants completed a financial fraud detection task, first unsupported and then supported by a DSS that highlighted important information sources. Participants were randomly allocated to four cohorts and were informed that the system's reliability was 100%, 90%, or 80%, or received no reliability information. Only the DSS known to be 100% reliable led participants to systematically follow its suggestions, increasing the percentage of correct classifications to a median of 100% while halving both decision time and the number of visually attended information sources. In all other conditions, the DSS had no effect on most visual sampling metrics, and the decision quality of the human-DSS team remained below the reliability level of the DSS itself. Knowledge that the system was even slightly unreliable therefore had a profound impact on joint decision making, with participants trusting their own, significantly worse, judgement more than the DSS's suggestions.
