A new software testing approach based on domain analysis of specifications and programs

Partition testing is a well-known software testing technique. This paper shows that partition testing strategies are relatively ineffective at detecting faults caused by small shifts in input domain boundaries. We present a new software testing approach based on input domain analysis of specifications and programs, and propose a principle and procedure for selecting boundary test cases in the functional domain and the operational domain. The differences between the two domains are examined by analyzing the sets of their boundary test cases. To automatically determine the operational domain of a program, we prototyped the ADSOD system. The system supports the determination of input domains not only for integer and real data types, but also for non-numeric data types such as characters and enumerated types. It consists of several modules that find illegal values of input variables with respect to specific expressions. We apply the new testing approach to several case studies. A preliminary evaluation of fault detection effectiveness and code coverage shows that the approach is highly effective at detecting faults due to small shifts in the input domain boundary, and is more economical in test case generation than partition testing strategies.
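The boundary-shift faults the abstract describes can be illustrated with a minimal sketch. This is not the paper's ADSOD procedure; it is a generic boundary-value selection over an integer domain, with hypothetical function names, showing why a test case sitting exactly on the specified boundary catches a small shift that a nominal (partition) test case misses.

```python
def boundary_values(lo, hi):
    """Boundary test points for an integer domain [lo, hi]:
    just outside, on, and just inside each boundary, plus a nominal value."""
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

# Functional (specified) domain: 0 <= x <= 10.
def spec_accepts(x):
    return 0 <= x <= 10

# Operational (implemented) domain with a small boundary shift:
# `x <= 10` was coded as `x < 10`.
def impl_accepts(x):
    return 0 <= x < 10

# Only the point on the shifted boundary exposes the fault; the nominal
# value 5, which a pure partition strategy would pick, behaves identically
# in both domains.
failures = [x for x in boundary_values(0, 10)
            if spec_accepts(x) != impl_accepts(x)]
```

Comparing the specified and implemented acceptance predicates over the boundary test points leaves `failures` containing exactly the shifted boundary point, 10.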
