Seeding strategies in search‐based unit test generation

Search-based techniques have been applied successfully to the task of generating unit tests for object-oriented software. However, as with any meta-heuristic search, efficiency depends heavily on many factors; seeding, which refers to the use of previous related knowledge to help solve the testing problem at hand, is one factor that may strongly influence this efficiency. This paper investigates different seeding strategies for unit test generation: in particular, the seeding of numerical and string constants derived statically and dynamically, the seeding of type information, and the seeding of previously generated tests. To understand the effects of these seeding strategies, the results of a large empirical study on open-source projects from the SF110 corpus and the Apache Commons repository are reported. These experiments show with strong statistical confidence that, even for a testing tool already able to achieve high coverage, the use of appropriate seeding strategies can further improve performance. © 2016 The Authors. Software Testing, Verification and Reliability published by John Wiley & Sons Ltd.
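To make the idea of statically derived constant seeding concrete, here is a minimal illustrative sketch (not the paper's actual implementation, which targets Java bytecode in EvoSuite). It extracts string and numeric literals from a hypothetical target function's source and biases random input generation toward those seeds, which is how a search-based generator can quickly hit string-equality branches that pure random search would almost never satisfy:

```python
import ast
import random

def extract_constants(source: str):
    """Statically collect numeric and string literals from source code."""
    consts = {"int": set(), "str": set()}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Constant):
            if isinstance(node.value, int) and not isinstance(node.value, bool):
                consts["int"].add(node.value)
            elif isinstance(node.value, str):
                consts["str"].add(node.value)
    return consts

# Hypothetical class under test: branch conditions compare against literals.
TARGET_SOURCE = '''
def classify(code):
    if code == "ADMIN":
        return 1
    if code == "GUEST":
        return 0
    return -1
'''

SEEDS = extract_constants(TARGET_SOURCE)

def generate_string_input(seed_probability=0.5):
    """With some probability, reuse a seeded constant instead of a random value."""
    if SEEDS["str"] and random.random() < seed_probability:
        return random.choice(sorted(SEEDS["str"]))
    # Fallback: a purely random string, which rarely matches any branch literal.
    return "".join(random.choice("abcxyz") for _ in range(5))
```

With seeding, roughly half of the generated inputs are `"ADMIN"` or `"GUEST"`, so both non-default branches are covered almost immediately; without it, a random five-character string essentially never equals either literal. The `seed_probability` parameter is an assumed tuning knob, analogous to the seeding probabilities the paper evaluates empirically.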