Analysis of Software Testing Strategies Through Attained Failure Size

This paper analyzes the efficacy of software testing strategies through the attained failure size. Failure size is the probability that an input drawn from the input domain causes a failure. As testing progresses and faults are debugged, the failure size decreases; its value when testing terminates is called the attained failure size. Using this measure, we compare the efficacies of partition testing and random testing, derive conditions under which partition testing is superior, and obtain optimal time allocations for partition testing. The core findings are summarized in a decision tree to assist testers in test management.
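To make the comparison concrete, the sketch below uses the standard single-parameter model common in the partition-versus-random-testing literature, in which subdomain i has failure size theta_i and receives n_i test cases; it is an illustrative assumption of this summary, not the specific model or notation of the paper, and all numbers, weights, and function names are hypothetical.

```python
# Illustrative sketch: probability of exposing at least one failure under
# random testing over the whole input domain vs. partition testing, assuming
# independent test selections and fixed per-subdomain failure sizes.

def random_detection_prob(theta: float, n: int) -> float:
    """Probability that n random tests over the whole domain expose at least
    one failure, given an overall failure size theta."""
    return 1.0 - (1.0 - theta) ** n


def partition_detection_prob(thetas: list[float], ns: list[int]) -> float:
    """Probability that partition testing exposes at least one failure when
    subdomain i has failure size thetas[i] and receives ns[i] test cases."""
    miss = 1.0
    for theta_i, n_i in zip(thetas, ns):
        miss *= (1.0 - theta_i) ** n_i
    return 1.0 - miss


if __name__ == "__main__":
    # Hypothetical numbers: one small, risky subdomain concentrates the failures.
    thetas = [0.10, 0.01, 0.01]   # per-subdomain failure sizes
    weights = [0.2, 0.4, 0.4]     # subdomain shares of the whole input domain
    theta_overall = sum(t * w for t, w in zip(thetas, weights))

    n_total = 30
    print("random   :", random_detection_prob(theta_overall, n_total))
    print("partition:", partition_detection_prob(thetas, [10, 10, 10]))
```

Under these assumed numbers, spreading the same testing effort evenly over the subdomains yields a higher detection probability than random testing over the whole domain, because the risky subdomain is guaranteed a share of the tests; how such allocations should be chosen optimally is the question the paper addresses via attained failure size.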
