A study of effective regression testing in practice

The purpose of regression testing is to ensure that changes made to software, such as adding new features or modifying existing ones, have not adversely affected features of the software that should not change. Regression testing is usually performed by running some, or all, of the test cases created to test modifications in previous versions of the software. Many techniques have been reported for selecting regression tests so that the number of test cases does not grow too large as the software evolves. Our proposed hybrid technique combines modification-, minimization-, and prioritization-based selection, using a list of source code changes and the execution traces of test cases run on previous versions. This technique seeks to identify a representative subset of all test cases that may result in different output behavior on the new software version. We report our experience with a tool called ATAC (Automatic Testing Analysis tool in C), which implements this technique.
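The abstract describes the hybrid selection only in prose, so the following is a minimal sketch of how modification-based selection, greedy minimization, and prioritization could be combined. It is written in Python for readability rather than in the paper's C tooling, and the function name select_regression_tests, the block identifiers, and the representation of coverage traces as sets of covered blocks are illustrative assumptions, not taken from the paper or from ATAC.

```python
# Hypothetical sketch of the hybrid regression test selection idea.
# Assumptions (not from the paper): each test's execution trace is a set
# of covered code blocks, and modified_blocks holds the blocks changed
# in the new version.

def select_regression_tests(coverage, modified_blocks):
    """Return a minimized, prioritized list of tests that touch changed code."""
    # 1. Modification-based selection: keep only tests whose execution
    #    trace intersects the changed blocks.
    candidates = {t: blocks & modified_blocks
                  for t, blocks in coverage.items()
                  if blocks & modified_blocks}

    # 2. Minimization: greedy set cover over the changed blocks, so the
    #    chosen subset still exercises every reachable modified block.
    uncovered = set().union(*candidates.values()) if candidates else set()
    selected = []
    while uncovered:
        best = max(candidates, key=lambda t: len(candidates[t] & uncovered))
        selected.append(best)
        uncovered -= candidates[best]

    # 3. Prioritization: order the selected tests by how many modified
    #    blocks each covers, so the most change-sensitive tests run first.
    selected.sort(key=lambda t: len(candidates[t]), reverse=True)
    return selected


if __name__ == "__main__":
    coverage = {
        "t1": {"b1", "b2"},
        "t2": {"b2", "b3", "b4"},
        "t3": {"b5"},
    }
    # Only t2 is needed: it alone covers both modified blocks.
    print(select_regression_tests(coverage, modified_blocks={"b2", "b4"}))
```

The minimization step is the classical greedy heuristic for set cover, which keeps the selected subset small while still exercising every modified block covered by at least one existing test.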
