Gap between academia and industry: a case of empirical evaluation of three software testing methods

Doing the right kind of testing has always been a challenging and decisive task for industry. To choose the right software testing method(s), industry needs objective knowledge of their effectiveness, efficiency, and applicability conditions. The most common way to obtain such knowledge is through empirical studies; reliable and comprehensive evidence can be gained by aggregating the results of different empirical studies (families of experiments), taking their findings and limitations into account. We conducted a study to investigate the current state of the empirical knowledge base for three testing methods. We found that, although the empirical studies conducted so far to evaluate testing methods contain many important and interesting results, we still lack factual and generalizable knowledge about the performance and applicability conditions of testing methods, which makes the findings unfit for ready adoption by industry. Moreover, we tried to identify the major factors that limit academia from producing reliable results with industrial impact. We believe that, besides effective and long-term academia-industry collaboration, there is a need for more systematic, quantifiable, and comprehensive empirical studies (which provide scope for aggregation using rigorous techniques), mainly replications, so as to create an effective and applicable knowledge base about testing methods that can potentially fill the gap between academia and industry.
