Alerting about possible risks vs. blocking risky choices: A quantitative model and its empirical evaluation

Abstract: Alerting users about possible threats and blocking users’ ability to perform potentially dangerous actions are two common ways to protect systems from the adverse effects of threats, such as malicious email attachments, fraudulent requests, or system malfunctions. We present a normative model of the effects of alerting and blocking on the value of the outcomes, on measures of risk-taking, and on the number of successful attacks. We compared warning and blocking systems, as well as binary- and likelihood-alarm systems, as a function of the properties of the threats and of the security system. We also compared the model’s predictions to actual user behavior, measured in a controlled experiment. The experimental results were generally in line with the normative model. However, the model predicted that the outcomes from blocking would always be equal to or worse than those from warnings, whereas the experiment showed that blocking may have an advantage over warnings: it leads to fewer undetected events (as the model predicts) without significantly lowering the mean value of the outcomes (where the model predicts a lower value). We discuss practical implications for the use of blocking and alerting and the more general value of combining optimal decision models with empirical experiments when determining system designs.

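To make the structure of such a normative model concrete, the sketch below gives one possible signal-detection-theoretic formulation of the binary-alarm case (likelihood alarms would add graded alert levels). It is an illustration under assumed parameters, not the authors' implementation: the threat base rate, the payoffs for safe actions and successful attacks, and the detector's sensitivity (d') and criterion are all hypothetical. A blocking system stops every alarmed action, while a normatively rational warned user overrides the alarm only when the conditional expected value of proceeding is positive; this reproduces the qualitative predictions above, namely that the expected value under warnings is never below that under blocking, while blocking admits no successful attacks beyond the detector's misses.

```python
# A minimal sketch, not the authors' implementation. All parameter values
# (threat base rate, payoffs, d', criterion) are illustrative assumptions.
from scipy.stats import norm


def alarm_rates(d_prime, criterion):
    """Hit and false-alarm rates for an equal-variance Gaussian detector."""
    p_hit = norm.sf(criterion - d_prime)  # P(alarm | threat)
    p_fa = norm.sf(criterion)             # P(alarm | no threat)
    return p_hit, p_fa


def compare_systems(p_threat, v_safe, c_attack, d_prime, criterion):
    """Expected value per event and rate of successful attacks under
    a blocking system vs. a warning system with a normative user."""
    p_hit, p_fa = alarm_rates(d_prime, criterion)
    p_alarm = p_threat * p_hit + (1 - p_threat) * p_fa

    # Posterior probability of a threat given the system's response.
    q_alarm = p_threat * p_hit / p_alarm
    q_quiet = p_threat * (1 - p_hit) / (1 - p_alarm)

    def ev_proceed(q):
        """Expected value of performing the action when P(threat) = q."""
        return q * c_attack + (1 - q) * v_safe

    # Blocking: every alarmed action is stopped (value 0); only threats
    # the detector misses succeed.
    ev_block = (1 - p_alarm) * ev_proceed(q_quiet)
    attacks_block = p_threat * (1 - p_hit)

    # Warning: the user proceeds despite the alarm only when the
    # conditional expected value of proceeding is positive, so the
    # expected value under warnings can never fall below blocking.
    override = ev_proceed(q_alarm) > 0
    ev_warn = ev_block + (p_alarm * ev_proceed(q_alarm) if override else 0.0)
    attacks_warn = attacks_block + (p_threat * p_hit if override else 0.0)

    return {"ev_block": ev_block, "ev_warn": ev_warn,
            "attacks_block": attacks_block, "attacks_warn": attacks_warn}


if __name__ == "__main__":
    # Illustrative numbers: 5% threat base rate, modest detector.
    print(compare_systems(p_threat=0.05, v_safe=1.0, c_attack=-10.0,
                          d_prime=1.5, criterion=0.5))
```

With the illustrative parameters above, the rational user complies with the alarm and the two systems coincide; raising v_safe (or shrinking the attack cost) eventually makes overriding rational, at which point the warning system gains expected value but admits more successful attacks, which is the trade-off the abstract describes.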