ADFD+: An Automatic Testing Technique for Finding and Presenting Failure Domains

 Abstract—This paper presents Automated Discovery of Failure Domain+ (ADFD+), an upgraded version of the ADFD technique with improvements to both its algorithm and its graphical presentation of failure domains. The new algorithm in ADFD+ searches for a failure domain within a given radius around a detected failure, whereas ADFD confines the search to a region between lower and upper bounds. As a result, fewer test cases are needed to detect a failure domain. The output has also been improved: ADFD+ produces labelled graphs that depict the results in an easily understandable, user-friendly form. ADFD+ is compared with Randoop to assess the relative performance of the two techniques. The results indicate that ADFD+ is a promising technique for finding failures and failure domains efficiently and effectively. Compared with Randoop, its efficiency is evident in requiring two orders of magnitude less time, and its effectiveness in requiring 50% or fewer test cases to discover failure domains. ADFD+ has the added advantage of presenting its output graphically, visually distinguishing point, block, and strip domains, whereas Randoop lacks a graphical user interface.
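
 To illustrate the radius-based search described in the abstract, the following Java sketch enumerates all inputs within a given radius of a known failing input and records which of them also fail, approximating the surrounding failure domain. It is a minimal illustration only, assuming a two-dimensional integer input domain, a hypothetical failsUnderTest predicate standing in for the class under test, and a seed failure already found by random testing; it is not the ADFD+ implementation.

 // Minimal sketch of a radius-based failure-domain search. The failure
 // condition below is a stand-in; replace failsUnderTest with a call to
 // the real system under test.
 import java.util.ArrayList;
 import java.util.List;

 public class RadiusSearchSketch {

     // Hypothetical oracle: treats an unexpected runtime exception (or,
     // here, an illustrative arithmetic condition) as a failure.
     static boolean failsUnderTest(int x, int y) {
         try {
             return (x * y) % 7 == 0 && x > 0; // stand-in failure condition
         } catch (RuntimeException e) {
             return true;                      // unexpected exception = failure
         }
     }

     // Enumerate every input within the given radius of a known failing
     // point (fx, fy) and collect those that also fail.
     static List<int[]> failureDomainAround(int fx, int fy, int radius) {
         List<int[]> failing = new ArrayList<>();
         for (int x = fx - radius; x <= fx + radius; x++) {
             for (int y = fy - radius; y <= fy + radius; y++) {
                 if (failsUnderTest(x, y)) {
                     failing.add(new int[] {x, y});
                 }
             }
         }
         return failing;
     }

     public static void main(String[] args) {
         // Suppose random testing already found a failure at (14, 21).
         List<int[]> domain = failureDomainAround(14, 21, 5);
         System.out.println("Failing inputs near the seed failure: " + domain.size());
         // Plotting these points would reveal whether the failure domain
         // around the seed is a point, block, or strip.
     }
 }

 Compared with a lower/upper-bound search, the radius-based enumeration tests only the (2r+1)^2 inputs in the neighbourhood of the seed failure, which is the source of the reduction in test cases claimed in the abstract.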
