Toward Fail Safety for Security Decisions

I was thinking about what I wanted to write about for my first editorial in IEEE Security & Privacy when I came across this statement in the 2019 Application Security Risk Report by Micro Focus1 (free, but it requires registration): “Because people are prone to error, manual security tasks that can otherwise be automated are at risk of being done incorrectly.” While this statement is not surprising in and of itself, it reminded me of the many ad hoc decisions made by people in various roles in computing systems, from programmers to administrators to end users, that we require to be correct for a system to achieve its security goals.

I normally work on and teach topics related to operating systems and software security, where we aim to build methods to detect and/or resolve security problems in current systems. As a result, we aim to automate security decisions as much as is practical. However, with increasing frequency, we are running into cases where systems request manual security decisions, and these often complex and ad hoc decisions are made by people with little guidance and essentially no safety net. While this may not surprise those who work on usable security or social engineering, it raises a question: How can researchers help systems avoid creating vulnerabilities that result from such manual decisions?