Provable Improvements on Branch Testing

This paper compares the fault-detecting ability of several software test data adequacy criteria. It has previously been shown that if C₁ properly covers C₂, then C₁ is guaranteed to be better at detecting faults than C₂, in the following sense: a test suite built by independently selecting one test case at random from each subdomain induced by C₁ is at least as likely to detect a fault as a test suite selected in the same way using C₂. In contrast, if C₁ subsumes but does not properly cover C₂, this guarantee does not necessarily hold. These results are used to compare a number of criteria, including several that have been proposed as stronger alternatives to branch testing. We compare the relative fault-detecting ability of data flow testing, mutation testing, and several condition-coverage techniques to branch testing, showing that most of the criteria examined are guaranteed to be better than branch testing according to two probabilistic measures. We also show that there are criteria that can sometimes be poorer at detecting faults than substantially less expensive ones.
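For readers unfamiliar with this style of comparison, the probability of detecting at least one fault under the selection strategy described above is commonly formalized along the following lines. This is a sketch, assuming a criterion C induces subdomains D_1, …, D_k of the input domain, where d_i = |D_i| and m_i counts the failure-causing inputs in D_i; the notation is illustrative rather than taken verbatim from the paper:

\[
M(C, P, S) \;=\; 1 \;-\; \prod_{i=1}^{k}\Bigl(1 - \frac{m_i}{d_i}\Bigr)
\]

Here M(C, P, S) is the probability that a suite formed by drawing one test uniformly at random from each subdomain reveals a fault in program P with respect to specification S. Under a measure of this form, "C₁ is at least as likely to detect a fault as C₂" means M(C₁, P, S) ≥ M(C₂, P, S) for every P and S; properly covering guarantees this inequality, while subsumption alone does not.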
