Some Vulnerabilities Are Different Than Others: Studying Vulnerabilities and Attack Surfaces in the Wild

The security of deployed and actively used systems is a moving target, influenced by factors that existing security metrics do not capture. For example, the count and severity of vulnerabilities in source code, as well as the corresponding attack surface, are commonly used as measures of a software product's security. But these measures do not provide a full picture. For instance, some vulnerabilities are never exploited in the wild, partly because security technologies make exploiting them difficult. As for the attack surface, its effectiveness as a metric has not been validated empirically in the deployment environment. We introduce several security metrics derived from field data that help complete the picture. They include the count of vulnerabilities exploited and the size of the attack surface actually exercised in real-world attacks. By evaluating these metrics on nearly 300 million reports of intrusion-protection telemetry, collected on more than six million hosts, we conduct an empirical study of security in the deployment environment. We find that none of the products in our study have more than 35% of their disclosed vulnerabilities exploited in the wild. Furthermore, the exploitation ratio and the exercised attack surface tend to decrease with newer product releases. We also find that hosts that quickly upgrade to newer product versions tend to have a reduced exercised attack surface. The proposed metrics enable a more complete assessment of the security posture of enterprise infrastructure. Additionally, they open up new research directions for improving security by focusing on the vulnerabilities and attacks that have the highest impact in practice.
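To make the two field-data metrics concrete, the following is a minimal sketch of how an exploitation ratio (exploited vulnerabilities over disclosed vulnerabilities) and an exercised attack surface (entry points attacked in the field over entry points declared for a release) could be computed from intrusion-protection reports. The record structure, field names, product names, and identifiers below are illustrative assumptions, not the paper's actual telemetry schema.

```python
# Illustrative sketch only: synthetic records and field names, not the
# paper's real telemetry format or dataset.
from dataclasses import dataclass

@dataclass(frozen=True)
class IntrusionReport:
    host_id: str
    product: str
    version: str
    cve_id: str       # vulnerability targeted by the detected attack
    entry_point: str  # resource exercised by the attack (e.g. an executable)

# Disclosed vulnerabilities and declared attack surface per product release
# (e.g. drawn from a vulnerability database and from the installed binaries).
disclosed = {("example_reader", "9.0"): {"CVE-0001", "CVE-0002", "CVE-0003"}}
declared_surface = {("example_reader", "9.0"): {"reader.exe", "broker.exe", "plugin.api"}}

# Synthetic field reports.
reports = [
    IntrusionReport("host-1", "example_reader", "9.0", "CVE-0001", "reader.exe"),
    IntrusionReport("host-2", "example_reader", "9.0", "CVE-0001", "reader.exe"),
]

def exploitation_ratio(product: str, version: str) -> float:
    """Fraction of disclosed vulnerabilities observed in real-world attacks."""
    exploited = {r.cve_id for r in reports
                 if (r.product, r.version) == (product, version)}
    total = disclosed[(product, version)]
    return len(exploited & total) / len(total)

def exercised_attack_surface(product: str, version: str) -> float:
    """Fraction of the declared attack surface actually targeted in the field."""
    exercised = {r.entry_point for r in reports
                 if (r.product, r.version) == (product, version)}
    total = declared_surface[(product, version)]
    return len(exercised & total) / len(total)

print(exploitation_ratio("example_reader", "9.0"))        # 1 of 3 CVEs -> ~0.33
print(exercised_attack_surface("example_reader", "9.0"))  # 1 of 3 entry points -> ~0.33
```

In this toy data only one of three disclosed vulnerabilities and one of three declared entry points appear in attack reports, so both metrics evaluate to roughly 0.33; the study's finding that no product exceeds a 35% exploitation ratio is the same kind of quantity computed over the full telemetry corpus.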
