On Comparing Testing Criteria for Logical Decisions

Various test case selection criteria have been proposed for the quality testing of software. Test sets satisfying different criteria commonly differ in both size and fault-detecting ability, and test sets that satisfy a stronger criterion and detect more faults usually contain more test cases. A question that often puzzles software testing professionals and researchers is: when a testing criterion C1 helps to detect more faults than another criterion C2, is it because C1 specifically requires test cases that are more fault-sensitive than those for C2, or is it essentially because C1 requires more test cases than C2? In this paper, we discuss several methods and approaches for investigating this question, and empirically compare several common coverage criteria for testing logical decisions while taking into account the different sizes of the test sets that these criteria require. Our results provide strong evidence that the stronger criteria under study are more fault-sensitive than the weaker ones, and not merely because they require more test cases. More importantly, we illustrate a general approach for demonstrating the superiority of one testing criterion over another that accounts for the size of the generated test sets.
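To make the size confound concrete, below is a minimal, self-contained Python sketch, not the paper's actual experiment: the decision (a and b) or c, the handful of single-operator mutants standing in for faults, and the equal-size random baseline are all invented here for illustration. It finds the minimum test sets satisfying decision coverage (DC) and MC/DC, counts how many mutants each kills, and compares each criterion against random test sets of the same size, which is the kind of size-controlled comparison the abstract argues for.

```python
# Illustrative sketch only (assumptions: the decision, the mutant set,
# and the random baseline are invented for this example).
import itertools
import random

CONDITIONS = ["a", "b", "c"]

def decision(a, b, c):
    return (a and b) or c

# Hypothetical single-operator mutants standing in for real faults.
MUTANTS = [
    lambda a, b, c: (a or b) or c,       # 'and' -> 'or'
    lambda a, b, c: (a and b) and c,     # 'or' -> 'and'
    lambda a, b, c: (a and not b) or c,  # negated condition b
    lambda a, b, c: (not a and b) or c,  # negated condition a
]

ALL_TESTS = list(itertools.product([False, True], repeat=3))

def satisfies_dc(tests):
    """Decision coverage: the decision evaluates to both True and False."""
    return {decision(*t) for t in tests} == {False, True}

def satisfies_mcdc(tests):
    """MC/DC (unique-cause form): each condition is shown to independently
    affect the outcome by a pair of tests differing only in that condition."""
    for i in range(len(CONDITIONS)):
        shown = False
        for t1, t2 in itertools.combinations(tests, 2):
            differs_only_i = all((t1[j] != t2[j]) == (j == i) for j in range(3))
            if differs_only_i and decision(*t1) != decision(*t2):
                shown = True
                break
        if not shown:
            return False
    return True

def kills(tests):
    """Number of mutants distinguished from the original by the test set."""
    return sum(any(m(*t) != decision(*t) for t in tests) for m in MUTANTS)

def minimal_sets(predicate):
    """All minimum-size subsets of ALL_TESTS satisfying the criterion."""
    for size in range(1, len(ALL_TESTS) + 1):
        found = [ts for ts in itertools.combinations(ALL_TESTS, size)
                 if predicate(ts)]
        if found:
            return found

random.seed(0)
for name, pred in [("DC", satisfies_dc), ("MC/DC", satisfies_mcdc)]:
    sets_ = minimal_sets(pred)
    size = len(sets_[0])
    avg_kills = sum(kills(s) for s in sets_) / len(sets_)
    # Size-controlled baseline: random test sets of the same size.
    rand_kills = sum(kills(random.sample(ALL_TESTS, size))
                     for _ in range(1000)) / 1000
    print(f"{name}: size={size}, mutants killed={avg_kills:.2f}, "
          f"same-size random baseline={rand_kills:.2f}")
```

For this decision, minimum DC sets have 2 tests while minimum MC/DC sets have 4, so the random baseline of matching size is what separates "stronger criterion" from "merely more tests" in the comparison.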
