Does higher security always result in better protection? An approach for mitigating the trade-off between usability and security

Security software protects computers against unwanted programs and informs users about potential danger. High security levels should, in principle, provide maximum protection, but they have the disadvantage of generating a large number of unnecessary warnings. Frequent interruptions of the workflow are perceived as annoying and may even reduce compliance with the security software. The objective of this study was to examine this trade-off between security and usability. While playing a computer game, participants were randomly attacked by a virus, and security software informed them about the potential damage. The control group was forced to use a very high security level, whereas the experimental group was allowed to select a security level and chose medium-high levels. Performance, behavioural, and subjective data were collected. The analysis revealed significant differences between the two groups: the experimental group complied more with the system, which led to better performance because less damage was incurred. Furthermore, their acceptance of and trust in the system were higher, while their perceived workload was lower. These findings indicate that reducing the security level may increase overall protection, as users are more willing to follow the advice given by the security software.
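
To make the trade-off concrete, the following is a minimal simulation sketch in Python, not the study's actual paradigm: warnings fire when a noisy detector score crosses a threshold set by the security level, and the simulated user heeds a warning with probability equal to the observed precision of past warnings (a simple "cry wolf" / probability-matching assumption). All parameters, such as the attack rate, detector quality, and thresholds, are illustrative assumptions rather than values from the study.

    import random

    def simulate(threshold, n_events=100_000, p_attack=0.1, seed=42):
        """Return (warnings, false_alarms, damaging_attacks) for one run.

        Hypothetical model: every event is benign or an attack; a detector
        score (attacks score higher on average) is compared against the
        threshold implied by the chosen security level. The simulated user
        complies with a warning with probability equal to the precision of
        warnings seen so far (probability matching).
        """
        rng = random.Random(seed)
        warnings = false_alarms = damage = 0
        compliance = 1.0  # users start out trusting the warnings fully
        for _ in range(n_events):
            attack = rng.random() < p_attack
            score = rng.gauss(2.5 if attack else 0.0, 1.0)
            blocked = False
            if score > threshold:  # security software raises a warning
                warnings += 1
                if not attack:
                    false_alarms += 1
                compliance = 1.0 - false_alarms / warnings  # observed precision
                blocked = rng.random() < compliance  # false alarms erode heeding
            if attack and not blocked:
                damage += 1
        return warnings, false_alarms, damage

    for label, threshold in [("very high security (threshold 0.0)", 0.0),
                             ("medium security (threshold 1.5)", 1.5)]:
        w, fa, d = simulate(threshold)
        print(f"{label}: {w} warnings, {fa} false alarms, {d} damaging attacks")

Under these assumptions, the lower threshold (very high security) raises far more false alarms, simulated compliance collapses, and more attacks end in damage than at the medium setting, mirroring the pattern reported in the study.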
