Detecting, isolating, and enforcing dependencies among and within test cases

Testing stateful applications is challenging because hidden dependencies on program state are difficult to identify. These dependencies may manifest between several test cases or within a single test case. When developers must document, understand, and respond to these dependencies on their own, a single mistake can produce unexpected and invalid test results. Although current testing infrastructure does not leverage state dependency information, we argue that it could, and that doing so would improve testing. Our results thus far show that by recovering dependencies between test cases and modifying the popular JUnit testing framework to exploit this information, we can optimize the testing process, reducing the time needed to run tests by 62% on average. Our ongoing work applies similar analyses to improve state-of-the-art test suite prioritization and test case generation techniques.
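To make the notion of a hidden dependency concrete, consider the following minimal JUnit 4 sketch (the class, field, and method names are hypothetical, not taken from our implementation). The second test passes only if the first has already executed in the same JVM, so reordering the tests or running the second in isolation silently changes the outcome:

```java
import static org.junit.Assert.assertEquals;

import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

// Hypothetical example of a hidden dependency between two test cases,
// coupled through mutable static state shared within the JVM.
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class HiddenDependencyTest {
    // The hidden state: a static field that persists across test methods.
    static int counter = 0;

    @Test
    public void test1_incrementsCounter() {
        counter++;
        assertEquals(1, counter);
    }

    @Test
    public void test2_dependsOnTest1() {
        // Passes only because test1_incrementsCounter ran first in this JVM;
        // fails if run alone, in a different order, or in a fresh process.
        assertEquals(1, counter);
    }
}
```

A testing framework that knows `test2_dependsOnTest1` reads state written by `test1_incrementsCounter` can schedule the two together while freely reordering or parallelizing the rest of the suite; without that knowledge, any reordering risks spurious failures like the one above.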
