Automated unit test generation for evolving software

As developers make changes to software programs, they want to ensure that the originally intended functionality of the software has not been affected. As a result, developers write tests and execute them after making changes. However, high-quality tests are needed to reveal unintended bugs, and not all developers have access to such tests. Moreover, since tests are written without knowledge of future changes, new tests are sometimes needed to exercise those changes. While this problem has been well studied in the literature, current approaches for automatically generating such tests either only attempt to reach the change, without aiming to propagate the infected state to the output, or may suffer from scalability issues, especially when a long sequence of calls is required for propagation. We propose a search-based approach that aims to automatically generate tests that can reveal functionality changes, given two versions of a program (i.e., pre-change and post-change). Developers can then use these tests to identify unintended functionality changes (i.e., bugs). Initial evaluation results show that our approach can be effective in detecting such changes, but challenges remain in scaling up test generation and in making the generated tests useful to developers, both of which we aim to overcome.
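The core idea described above, generating a test that both reaches a change and propagates its effect to an observable output, can be illustrated with a minimal sketch. The functions `f_old`, `f_new`, and the random search below are hypothetical stand-ins, not the paper's actual technique: a real search-based generator (e.g., one built on genetic algorithms over unit-test sequences) would use a guided fitness function rather than pure random sampling.

```python
import random

# Hypothetical pre-change and post-change versions of a method under test.
def f_old(x):
    return abs(x)

def f_new(x):
    # Seeded behavioral change: diverges from f_old for negative even inputs.
    return abs(x) if x >= 0 or x % 2 else abs(x) + 1

def reveals_change(inp):
    """An input reveals a functionality change if the two versions
    produce different observable outputs for it (i.e., the infected
    state propagates to the output)."""
    return f_old(inp) != f_new(inp)

def search_for_revealing_input(trials=1000, seed=0):
    """Minimal sketch of difference-revealing test generation:
    sample candidate inputs and return the first one whose outputs
    diverge between the two versions, or None if the budget runs out."""
    rng = random.Random(seed)
    for _ in range(trials):
        candidate = rng.randint(-100, 100)
        if reveals_change(candidate):
            return candidate
    return None
```

A revealing input found this way can then be turned into a regression unit test: an assertion recording `f_old`'s output will fail when run against `f_new`, surfacing the behavioral difference to the developer.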
