AN EMPIRICAL STUDY OF THE BRANCH COVERAGE OF DIFFERENT FAULT CLASSES

The question "How much testing is enough?" has led many to structural testing methods. Much has been written about their fault-detecting ability, but how does this ability vary with the class of fault? This paper introduces the term "Affected Branch Coverage". An affected branch is a branch that had to be modified in order to fix a fault; Affected Branch Coverage is the percentage of affected branches that were exercised during testing. The study was conducted on a leading on-line transaction processing product, analyzing ninety-eight field errors. The specific questions addressed are:

- Which classes of faults are most commonly observed?
- Which fault classes can be associated with covered code, and which with uncovered code?
- Is affected branch coverage related to the maturity of the software?

Our results show that whether a fault appears in covered code depends strongly on the fault class. While this was true in both newer and older code, it was more pronounced in newer code. Overall, we found that affected branch coverage was slightly less than 50%, suggesting that increasing branch coverage would offer limited gains in fault detection.
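To make the metric concrete, the following is a minimal sketch of how affected branch coverage could be computed from test data. The branch identifiers, the fault-to-branch mapping, and the function name compute_affected_branch_coverage are illustrative assumptions, not artifacts of the original study.

    # Sketch: affected branch coverage = percentage of branches modified to
    # fix faults that were also exercised by the test suite.
    def compute_affected_branch_coverage(affected_branches, covered_branches):
        """Return the percentage of affected branches exercised in testing."""
        affected = set(affected_branches)
        if not affected:
            return 0.0
        covered_affected = affected & set(covered_branches)
        return 100.0 * len(covered_affected) / len(affected)

    # Hypothetical example: 3 of the 4 branches touched by fault fixes were covered.
    affected = {"f.c:12-true", "f.c:12-false", "g.c:40-true", "h.c:7-false"}
    covered = {"f.c:12-true", "f.c:12-false", "g.c:40-true", "x.c:3-true"}
    print(compute_affected_branch_coverage(affected, covered))  # 75.0

In this framing, a low value means that many of the branches implicated in field faults were never executed during testing, while a high value means faults escaped detection even though the faulty branches were exercised.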
