Indirect effects in evidential assessment: a case study on regression test technology adoption

Background: Most software development organizations need efficient regression testing, and the proposed solutions often involve automation. However, despite this being a well-researched area, research results are rarely applied in industrial practice. Aim: In this paper we aim to bridge the gap between research and practice by providing examples of how evidence-based regression testing approaches can be adopted in industry. We also discuss challenges for the research community. Method: An industrial case study was carried out to evaluate the possibility of improving regression testing at Sony Ericsson Mobile Communications. We analyse the procedure undertaken based on frameworks from the evidence-based software engineering (EBSE) paradigm (with a focus on the evidence) and the automation literature (with a focus on the practical effects). Results: Our results pinpoint the need for systematic approaches when introducing a new tool. Practitioners and researchers need congruent guidelines supporting the appraisal of both the evidence base and the pragmatic effects, direct as well as indirect, of the changes. This is illustrated by the introduction of the automation perspective.