Test suite generation with the Many Independent Objective (MIO) algorithm

Abstract

Context: Automatically generating test suites is intrinsically a multi-objective problem, as each testing target (e.g., a statement to execute or a mutant to kill) is an objective in its own right. Test suite generation has peculiarities that distinguish it from more typical optimization problems. For example, given an existing test suite, one can add more tests to cover the remaining objectives. One would like the smallest number of small tests to cover as many objectives as possible, but that is a secondary goal compared to covering those targets in the first place. Furthermore, the number of objectives in software testing can quickly become unmanageable, on the order of (tens or hundreds of) thousands, especially for system testing of industrial-size systems.

Objective: To overcome these issues, different techniques have been proposed, such as the Whole Test Suite (WTS) approach and the Many-Objective Sorting Algorithm (MOSA). However, those techniques might not scale well to very large numbers of objectives and limited search budgets (a typical case in system testing). In this paper, we propose a novel algorithm, called the Many Independent Objective (MIO) algorithm. This algorithm is designed and tailored based on the specific properties of test suite generation.

Method: An empirical study was carried out for test suite generation on a series of artificial examples and seven RESTful API web services. The EvoMaster system test generation tool was used, in which MIO, MOSA, WTS and random search were compared.

Results: The presented MIO algorithm achieved the best overall performance, but was not the best on all problems.

Conclusion: The proposed MIO algorithm is a step forward in the automation of test suite generation for system testing. However, there are still properties of system testing that can be exploited to achieve even better results.
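The abstract describes MIO only at a high level. The following is a minimal, simplified sketch of its core idea as presented in the MIO papers: one small, independent population per testing target, with new tests sampled either at random or by mutating a test kept for some not-yet-covered target. All names here (`MIO`, `fitness`, `sample_random`, `mutate`, `p_random`) are illustrative, not taken from EvoMaster's actual API, and details such as the dynamic decrease of the random-sampling rate and the shrinking population sizes in the real algorithm are omitted.

```python
import random


class MIO:
    """Simplified sketch of the Many Independent Objective (MIO) algorithm:
    one bounded, independent population per target; tests enter a population
    based only on their heuristic value for that target."""

    def __init__(self, targets, fitness, sample_random, mutate,
                 population_size=10, p_random=0.5):
        self.fitness = fitness              # fitness(test, target) -> value in [0, 1]
        self.sample_random = sample_random  # () -> new random test
        self.mutate = mutate                # (test) -> mutated copy of a test
        self.population_size = population_size
        self.p_random = p_random            # fixed here; decreased over time in real MIO
        # One independent population per not-yet-covered target.
        self.populations = {t: [] for t in targets}
        self.covered = {}                   # target -> test that covers it

    def sample(self):
        """Either sample a brand-new random test, or mutate a test kept
        for a randomly chosen uncovered target."""
        non_empty = [t for t, pop in self.populations.items() if pop]
        if not non_empty or random.random() < self.p_random:
            return self.sample_random()
        target = random.choice(non_empty)
        _, test = random.choice(self.populations[target])
        return self.mutate(test)

    def evaluate(self, test):
        """Score the test against every uncovered target independently."""
        for target in list(self.populations):
            h = self.fitness(test, target)
            if h >= 1.0:
                # Target covered: keep one covering test, stop searching for it.
                self.covered[target] = test
                del self.populations[target]
            elif h > 0.0:
                pop = self.populations[target]
                pop.append((h, test))
                pop.sort(key=lambda entry: entry[0], reverse=True)
                del pop[self.population_size:]  # keep only the best few

    def run(self, budget):
        for _ in range(budget):
            self.evaluate(self.sample())
        return self.covered
```

Because each target's population is kept and culled independently, adding more targets does not force a single shared population to spread its selection pressure across all of them, which is the scalability argument the abstract makes against whole-suite and Pareto-based approaches.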
