The description-experience gap in the effect of warning reliability on user trust and performance in a phishing-detection context

Abstract

How a human user trusts and interacts with an automated system is influenced by how well the system's capabilities are conveyed to the user. When interacting with such a system, the user can learn about its reliability either through an explicit description of that reliability or through experience with the system over time. The term description-experience gap refers to the difference between description-based and experience-based human decisions. In the current study, we investigated how this gap applies to human-automation interaction in a phishing-detection task in the cyber domain. In two experiments, participants' performance in detecting phishing emails and their trust in the phishing-detection system were measured while system reliability, description, and experience (i.e., feedback) were varied systematically across easy and difficult detection tasks. The results suggest that system reliability had a profound influence on human performance with the system, but that the benefit of a more reliable system may depend on task difficulty. In addition, providing feedback improved trust calibration on both objective and subjective trust measures, whereas providing a description of system reliability increased only subjective trust. This pattern not only reveals a gap between the effects of feedback and description but also extends the description-experience gap concept from rare events to common events.
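To make the experience (feedback) manipulation concrete, the sketch below simulates how trial-by-trial feedback lets an observer converge on a warning system's true reliability, in contrast to a one-shot described value. This is a minimal illustration, not the paper's method: the function name, the number of trials, and the assumption that calibrated trust tracks the aid's observed hit rate are all hypothetical.

```python
import random

def experienced_reliability(true_reliability: float, n_trials: int, seed: int = 0) -> float:
    """Hypothetical sketch of the 'experience' condition.

    On each trial the phishing-detection aid's warning is correct with
    probability `true_reliability`; feedback reveals whether it was right,
    and the observer's calibrated trust is modeled as the running hit rate.
    """
    rng = random.Random(seed)
    correct = sum(rng.random() < true_reliability for _ in range(n_trials))
    return correct / n_trials

if __name__ == "__main__":
    for r in (0.6, 0.9):  # low- vs. high-reliability aid
        estimate = experienced_reliability(r, n_trials=100)
        print(f"true reliability {r:.2f} -> experienced estimate {estimate:.2f}")
```

Under this toy model, the experienced estimate approaches the true reliability as trials accumulate, which is one way to think about why feedback can calibrate objective trust while a described reliability figure, never tested against outcomes, may shift only subjective trust.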
