Adaptive Regression Testing Strategy: An Empirical Study

When software systems evolve, different versions can involve different amounts and types of code modification. These factors affect the costs and benefits of regression testing techniques in different ways, so no single regression testing technique is likely to be the most cost-effective on every version. To date, many regression testing techniques have been proposed, but no research has addressed the problem of helping practitioners systematically choose appropriate techniques for new versions as systems evolve. To address this problem, we propose adaptive regression testing (ART) strategies that attempt to identify the regression testing technique that will be the most cost-effective for each regression testing session, taking an organization's situation and testing environment into account. To assess our approach, we conducted an experiment focusing on test case prioritization techniques. Our results show that the prioritization techniques selected by our approach can be more cost-effective than those used by the control approaches.
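The core idea of an adaptive strategy can be illustrated with a minimal sketch: for each regression testing session, estimate the cost and benefit of each candidate prioritization technique on the new version, then select the technique with the best estimated trade-off. This is not the paper's actual selection algorithm; the technique names and all cost/benefit figures below are hypothetical, and the benefit-per-cost ratio stands in for whatever cost-benefit model an organization adopts.

```python
# Illustrative sketch only: per-session selection of a prioritization
# technique by estimated benefit-to-cost ratio. Names and numbers are
# hypothetical, not the paper's data or model.

def select_technique(candidates):
    """Return the candidate whose estimated benefit per unit cost is highest."""
    return max(candidates, key=lambda t: t["benefit"] / t["cost"])

sessions = [
    # Each session supplies fresh per-technique estimates for the new version,
    # since modification type and size change the trade-offs from version to version.
    [{"name": "total-coverage", "cost": 4.0, "benefit": 6.0},
     {"name": "additional-coverage", "cost": 5.0, "benefit": 9.0}],
    [{"name": "total-coverage", "cost": 2.0, "benefit": 5.0},
     {"name": "additional-coverage", "cost": 6.0, "benefit": 7.0}],
]

for i, candidates in enumerate(sessions, start=1):
    best = select_technique(candidates)
    print(f"session {i}: {best['name']}")
```

The point of the sketch is that the chosen technique differs across sessions: the same two candidates yield different winners as the per-version estimates shift, which is why a fixed, one-time choice of technique can be suboptimal.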