Supporting Test Suite Evolution through Test Case Adaptation

Software systems evolve during development and maintenance, and many test cases designed for early versions of a system become obsolete over the software lifecycle. Repairing test cases that no longer compile due to changes in the code under test, and generating new test cases to exercise the changed code, are expensive and time-consuming activities that could benefit from automated approaches. In this paper we propose an approach for automatically repairing and generating test cases during software evolution. Unlike existing approaches to test case generation, our approach exploits the information available in existing test cases, defines a set of heuristics to repair test cases invalidated by changes in the software, and generates new test cases for the evolved software. The results obtained with a prototype implementation of the technique show that the approach can effectively maintain evolving test suites and performs well compared to competing approaches.
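
To illustrate the kind of compilation break such an approach targets, consider a unit test invalidated by a method-signature change. The sketch below is a hypothetical example, not taken from the paper: the Account class, the added allowOverdraft parameter, and the default value supplied in the repaired test are all assumptions chosen for illustration.

```java
import org.junit.Assert;
import org.junit.Test;

// Hypothetical evolved class under test: withdraw() gained an allowOverdraft parameter,
// so tests written against the old signature no longer compile.
class Account {
    private int balance;

    Account(int initialBalance) {
        this.balance = initialBalance;
    }

    // Previously: void withdraw(int amount)
    void withdraw(int amount, boolean allowOverdraft) {
        if (!allowOverdraft && amount > balance) {
            throw new IllegalArgumentException("insufficient funds");
        }
        balance -= amount;
    }

    int getBalance() {
        return balance;
    }
}

public class AccountTest {

    // Obsolete call that no longer compiles against the evolved signature:
    //   account.withdraw(30);

    // Repaired test: the new parameter is filled with a neutral default value,
    // preserving the intent of the original assertion.
    @Test
    public void withdrawReducesBalance() {
        Account account = new Account(100);
        account.withdraw(30, false);
        Assert.assertEquals(70, account.getBalance());
    }
}
```

A repair heuristic of this kind keeps the existing test inputs and assertions and only adapts the call site to the new declaration; more invasive changes (removed methods, changed return types) require generating new test code rather than patching the old call.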
