An initial study on the use of execution complexity metrics as indicators of software vulnerabilities

Allocating code inspection and testing resources to the most problematic code areas is important for reducing development time and cost. While complexity metrics collected statically from software artifacts are known to be helpful in finding vulnerable code locations, some complex code is rarely executed in practice, so its vulnerabilities have less chance of being detected. To augment the use of static complexity metrics, this study examines execution complexity metrics, collected during code execution, as indicators of vulnerable code locations. We conducted case studies on two large, widely used open source projects, the Mozilla Firefox web browser and the Wireshark network protocol analyzer. Our results indicate that execution complexity metrics are better indicators of vulnerable code locations than the most commonly used static complexity metric, lines of source code. The ability of execution complexity metrics to discriminate vulnerable code locations from neutral ones and to predict vulnerable code locations varies across projects. However, vulnerability prediction models built with execution complexity metrics are superior to models built with static complexity metrics in reducing inspection effort.
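To make the comparison concrete, the sketch below shows one plausible way such a prediction model and its inspection-effort comparison could be set up; it is not the authors' implementation. It uses a scikit-learn logistic regression on randomly generated toy data, and the feature names (execution metrics versus a static lines-of-code baseline) are hypothetical stand-ins for the metrics described above.

```python
# Minimal sketch (not the study's implementation) of a vulnerability
# prediction model using execution complexity metrics as features.
# All data and feature groupings here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Toy data: one row per source file, one column per metric.
n_files = 200
X_exec = rng.lognormal(mean=2.0, sigma=1.0, size=(n_files, 3))    # execution metrics (hypothetical)
X_static = rng.lognormal(mean=5.0, sigma=1.0, size=(n_files, 1))  # static LOC baseline
y = rng.integers(0, 2, size=n_files)                              # 1 = vulnerable, 0 = neutral

def ranked_inspection_effort(X, y, loc):
    """Rank files by predicted vulnerability probability and report the
    fraction of total LOC that must be inspected, in ranked order,
    before every vulnerable file has been covered."""
    probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=5, method="predict_proba")[:, 1]
    order = np.argsort(-probs)            # inspect most suspicious files first
    cum_loc = np.cumsum(loc[order])
    last_vuln = np.max(np.where(y[order] == 1)[0])  # assumes at least one vulnerable file
    return cum_loc[last_vuln] / loc.sum()

loc = X_static[:, 0]
print("effort, execution metrics:", ranked_inspection_effort(X_exec, y, loc))
print("effort, static LOC only:  ", ranked_inspection_effort(X_static, y, loc))
```

Inspection effort is measured here as the fraction of total source lines a reviewer would examine, following the model's ranking, before reaching every vulnerable file; a lower value for the execution-metric model would mirror the paper's finding that such models reduce inspection effort relative to static metrics.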
