Random Test Case Generation and Manual Unit Testing: Substitute or Complement in Retrofitting Tests for Legacy Code?

Unit testing of legacy code is often characterized by the goal of finding as many defects as possible with minimal effort. In the context of restrictive time frames and limited resources, approaches for generating test cases promise increased defect-detection effectiveness. This paper presents the results of an empirical study investigating the effectiveness of (a) manual unit testing conducted by 48 master's students within a time limit of 60 minutes and (b) tool-supported random test case generation with Randoop. Both approaches were applied to a Java collection class library containing 35 seeded defects. With these specific settings, where time and resource restrictions limit the performance of manual unit testing, we found that (1) the number of defects detected by random test case generation is in the range of manual unit testing and, furthermore, (2) the randomly generated test cases detect different defects than manual unit testing. Therefore, random test case generation appears to be a useful aid for jump-starting manual unit testing of legacy code.
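The core idea behind tools such as Randoop can be illustrated with a minimal sketch (not Randoop's actual implementation): generate random sequences of operations against a collection class under test, and check its behavior against a trusted reference implementation, flagging any divergence as a candidate defect. The class and method names below are illustrative assumptions, not part of the study's artifacts.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Random;

// Sketch of random test-case generation for a collection class:
// apply a random operation sequence to the class under test and to a
// reference implementation, and count divergences as candidate defects.
public class RandomCollectionTester {
    public static int run(long seed, int steps) {
        Random rnd = new Random(seed);
        List<Integer> underTest = new ArrayList<>();   // class under test
        List<Integer> reference = new LinkedList<>();  // trusted oracle
        int divergences = 0;
        for (int i = 0; i < steps; i++) {
            int op = rnd.nextInt(3);
            int value = rnd.nextInt(100);
            if (op == 0) {                             // random insertion
                underTest.add(value);
                reference.add(value);
            } else if (op == 1 && !reference.isEmpty()) {  // random removal
                int idx = rnd.nextInt(reference.size());
                underTest.remove(idx);
                reference.remove(idx);
            } else {                                   // random query
                if (underTest.contains(value) != reference.contains(value)) {
                    divergences++;
                }
            }
            if (!underTest.equals(reference)) {        // state comparison
                divergences++;
            }
        }
        return divergences;
    }

    public static void main(String[] args) {
        // With two correct implementations, no divergence should be reported;
        // a seeded defect in the class under test would raise the count.
        System.out.println(run(42L, 1000));
    }
}
```

In a seeded-defect setting like the one studied here, the class under test would be the faulty library class, and each divergence (or uncaught exception) corresponds to a potentially defect-revealing test case that can be minimized and kept as a regression test.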
