A methodology for quantitative evaluation of software reliability using static analysis

This paper proposes a methodology for quantitatively evaluating the reliability of updated COTS or open-source components. The model combines static analysis of the existing source code modules, limited testing with execution-path capture, and a series of Bayesian Belief Networks. Static analysis detects faults in the source code that may lead to failure, while code coverage determines which paths through the source code are executed and how frequently. A first series of Bayesian Belief Networks combines these parameters to estimate the reliability of each method; a second series then combines the module reliabilities to estimate the net software reliability. As a proof of concept, the model is applied to five open-source applications and the results are compared with reliability estimates obtained from the STREW (Software Testing and Reliability Early Warning) metrics. The model proves highly effective: its results fall within the confidence interval of the STREW reliability calculations and typically differ by less than 2%. This model offers practical benefits to software engineers. Using it, one can quickly assess the reliability of a given release of a software module supplied by an external vendor and determine whether it is more or less reliable than a previous release, independent of any knowledge of the developer's software development process and without any development metrics.
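To make the combination step concrete, the following is a minimal sketch, not the authors' implementation, of how a small discrete Bayesian Belief Network could fuse a static-analysis fault signal with execution-path coverage into a per-method reliability estimate and then aggregate methods into a net figure. All node names, conditional probability tables, the linear failure-exposure relation, and the independence-based aggregation rule are illustrative assumptions rather than parameters taken from the paper.

"""
Illustrative sketch: combine static-analysis warnings and execution
rate via a two-node discrete BBN per method, then aggregate.
All probability values and structure are assumed for illustration.
"""
from __future__ import annotations
from dataclasses import dataclass
from math import prod


@dataclass
class MethodEvidence:
    name: str
    warnings: int          # static-analysis findings reported for the method
    execution_rate: float  # fraction of captured test executions traversing it


def p_fault_given_warnings(warnings: int) -> float:
    """Assumed CPT: P(latent fault present | number of static-analysis warnings)."""
    if warnings == 0:
        return 0.02
    if warnings <= 2:
        return 0.15
    return 0.40


def p_failure_given_fault(execution_rate: float) -> float:
    """Assumed CPT: P(fault manifests as a failure | fault present, execution rate).
    Paths executed more often are assumed to expose latent faults more often."""
    return 0.1 + 0.8 * execution_rate


def method_reliability(ev: MethodEvidence) -> float:
    """Marginalise over the latent 'fault present' node:
    P(no failure) = 1 - P(fault) * P(failure | fault, execution rate)."""
    p_fault = p_fault_given_warnings(ev.warnings)
    p_fail = p_fault * p_failure_given_fault(ev.execution_rate)
    return 1.0 - p_fail


def module_reliability(methods: list[MethodEvidence]) -> float:
    """Illustrative aggregation: treat method failures as independent,
    so module reliability is the product of method reliabilities."""
    return prod(method_reliability(m) for m in methods)


if __name__ == "__main__":
    # Hypothetical evidence for three methods of one module.
    evidence = [
        MethodEvidence("parseConfig", warnings=3, execution_rate=0.9),
        MethodEvidence("renderPage", warnings=0, execution_rate=0.7),
        MethodEvidence("flushCache", warnings=1, execution_rate=0.2),
    ]
    for m in evidence:
        print(f"{m.name}: R = {method_reliability(m):.3f}")
    print(f"module: R = {module_reliability(evidence):.3f}")

A real application of the methodology would replace the hard-coded tables with learned or elicited conditional probabilities and would use a second network, rather than a simple product, to combine module reliabilities into the system-level estimate.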

[1] Benjamin Livshits, et al. Finding Security Vulnerabilities in Java Applications with Static Analysis, 2005, USENIX Security Symposium.

[2] David Evans, et al. Statically Detecting Likely Buffer Overflow Vulnerabilities, 2001, USENIX Security Symposium.

[3] Jeffrey S. Foster, et al. A comparison of bug finding tools for Java, 2004, 15th International Symposium on Software Reliability Engineering.

[4] Abhishek Rai, et al. On the Role of Static Analysis in Operating System Checking and Runtime Verification, 2005.

[5] Cyrille Artho. Finding faults in multi-threaded programs, 2001.

[6] Dawson R. Engler, et al. Static Analysis Versus Model Checking for Bug Finding, 2005, CONCUR.

[7] Laurie A. Williams, et al. On the value of static analysis for fault detection in software, 2006, IEEE Transactions on Software Engineering.

[8] Nancy G. Leveson, et al. An experimental evaluation of the assumption of independence in multiversion programming, 1986, IEEE Transactions on Software Engineering.

[9] Laurie A. Williams, et al. GERT: an empirical reliability estimation and testing feedback tool, 2004, 15th International Symposium on Software Reliability Engineering.

[10] Mansoor Alam, et al. Evaluating the Effectiveness of Java Static Analysis Tools, 2007, ESA.

[11] Laurie A. Williams, et al. Preliminary results on using static analysis tools for software inspection, 2004, 15th International Symposium on Software Reliability Engineering.

[12] Mansoor Alam. The Software Static Analysis Reliability Toolkit, 2006.

[13] Jason A. Osborne, et al. Initial results of using in-process testing metrics to estimate software reliability, 2004.

[14] Nachiappan Nagappan, et al. A software testing and reliability early warning (STREW) metric suite, 2005.

[15] D. N. Kleidermacher. Integrating Static Analysis into a Secure Software Development Process, 2008, 2008 IEEE Conference on Technologies for Homeland Security.

[16] T. J. Ostrand, et al. Using static analysis to determine where to focus dynamic testing effort, 2004.

[17]  M. Alam,et al.  Work in progress - Measuring the ROItimefor Static Analysis , 2005, Proceedings Frontiers in Education 35th Annual Conference.