Who's the real expert here? Pedigree's unique bias on trust between human and automated advisers.

OBJECTIVE We assessed how source type (human or automated) biases adviser trust in a dual-adviser decision-making task. BACKGROUND The effects of source type and reliability on adviser trust have been studied in dual-adviser contexts, but the influence of pedigree (perceived expertise) across source types has not been robustly investigated. Because situations with two decision aids of uneven pedigree can easily arise, it is critical to understand how operators are biased towards a decision aid of a particular source type and pedigree. METHOD Participants completed a decision-making task modeled on the Convoy Leader paradigm (Lyons and Stokes, 2012), selecting a military convoy route in the presence of IEDs and insurgent activity. We measured behavioral reliance and trust attitudes. Pedigree was manipulated through controlled adviser descriptions, consistent with past investigations (Madhavan and Wiegmann, 2007a). RESULTS We found a trust bias towards the human adviser that reversed only when the automated adviser had far greater pedigree. Trust attitudes were also strongly indicative of reliance behaviors. CONCLUSION Pedigree strongly influences trust in a decision aid, and that influence is biased towards human advisers. Trust is highly predictive of reliance decisions. APPLICATION System designers must take care with how "expert" automation is portrayed, particularly when it is used in conjunction with human advisers (e.g., conflicting advice from air traffic control and an onboard system).
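The RESULTS and CONCLUSION state that trust attitudes strongly predicted reliance behaviors. As an illustration of how such a relationship can be examined, the sketch below fits a logistic regression of adviser choice on the difference in trust ratings between the two advisers. This is a minimal example on synthetic data, not the authors' analysis; the variable names (trust_diff, chose_automation), the simulated human-favoring bias, and the effect size are assumptions for illustration only.

```python
# Minimal sketch (synthetic data): does relative trust predict reliance on the
# automated adviser? All names and effect sizes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# trust_diff: trust in the automated adviser minus trust in the human adviser
# (e.g., the difference of two rating-scale scores), centered below zero to
# mimic the human-favoring trust bias described in the abstract.
trust_diff = rng.normal(loc=-0.5, scale=1.5, size=n)

# Simulate reliance: higher relative trust in automation makes following the
# automated adviser's recommended route more likely.
p_choose_automation = 1.0 / (1.0 + np.exp(-(0.2 + 1.2 * trust_diff)))
chose_automation = rng.binomial(1, p_choose_automation)

# Logistic regression: reliance choice as a function of relative trust.
X = trust_diff.reshape(-1, 1)
model = LogisticRegression().fit(X, chose_automation)

print("slope (log-odds per unit trust difference):", model.coef_[0][0])
print("classification accuracy:", model.score(X, chose_automation))
```

A fuller treatment of data like these would account for repeated decisions within each participant (e.g., a mixed-effects logistic regression), since every operator contributes multiple routing choices.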

[1] N. Harvey, et al. Taking Advice: Accepting Help, Improving Judgment, and Sharing Responsibility. 1997.

[2] P. Madhavan and D. A. Wiegmann. Effects of Information Source, Pedigree, and Reliability on Operator Interaction With Decision Support Systems. Hum. Factors, 2007.

[3] S. M. Merritt. Affective Processes in Human–Automation Interactions. Hum. Factors, 2011.

[4] R. Ohanian. Construction and Validation of a Scale to Measure Celebrity Endorsers' Perceived Expertise, Trustworthiness, and Attractiveness. 1990.

[5] M. Bashir, et al. Trust in Automation. Hum. Factors, 2015.

[6] D. R. Thomas, et al. Difference Scores From the Point of View of Reliability and Repeated-Measures ANOVA. 2012.

[7] L. Cronbach. Essentials of Psychological Testing. 1960.

[8] D. B. Kaber, et al. The effects of level of automation and adaptive automation on human performance, situation awareness and workload in a dynamic control task. 2004.

[9] R. Parasuraman, et al. Humans and Automation: Use, Misuse, Disuse, Abuse. Hum. Factors, 1997.

[10] C. B. Mayhorn, et al. Differences in trust between human and automated decision aids. HotSoS, 2016.

[11] D. Trafimow. A defense against the alleged unreliability of difference scores. 2015.

[12] A. P. Sage. Decision support systems engineering. 1991.

[13] R. A. Pomranky, et al. The role of trust in automation reliance. Int. J. Hum. Comput. Stud., 2003.

[14] L. G. Pierce, et al. Predicting Misuse and Disuse of Combat Identification Systems. 2001.

[15] D. R. Ilgen, et al. Attitudinal predictors of relative reliance on human vs. automated advisors. 2015.

[16] A. Cheema, et al. Data collection in a flat world: the strengths and weaknesses of Mechanical Turk samples. 2013.

[17] J. B. Lyons and C. K. Stokes. Human–Human Reliance in the Context of Automation. Hum. Factors, 2012.

[18] J. H. Davis, et al. An Integrative Model of Organizational Trust. 1995.