Analyzing the relationships between inspections and testing to provide a software testing focus

Abstract

Context: Quality assurance effort, especially testing effort, is frequently a major cost factor during software development. Consequently, one major goal is often to reduce testing effort. One promising way to improve the effectiveness and efficiency of software quality assurance is to use data from early defect detection activities to focus testing. Studies indicate that combining early defect data with other product data to focus testing activities outperforms using product data alone. A key challenge is that exploiting data from early defect detection activities (such as inspections) to focus testing requires a thorough understanding of the relationships between these early activities and testing. An aggravating factor is that these relationships are highly context-specific and must be evaluated for each concrete environment.

Objective: The goal of this paper is to help companies gain a better understanding of these relationships for their own environment and to provide them with a methodology for identifying such relationships there.

Method: This article compares three strategies for evaluating assumed relationships between inspections and testing: a confidence counter, different quality classes, and the F-measure (including precision and recall).

Results: One result of this case-study-based comparison is that evaluations based on aggregated F-measures are more suitable for industrial environments than evaluations based on a confidence counter, and they provide more detailed insights into the validity of the relationships.

Conclusion: We have confirmed that inspection results are suitable data for controlling testing activities. Evaluated knowledge about relationships between inspections and testing can be used in the integrated inspection and testing approach In2Test to focus testing activities, with product data used in addition. However, the assumptions have to be re-evaluated in each new context.
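To make the Method concrete: precision, recall, and the F-measure can be computed for an assumed relationship by treating the code parts that the relationship flags as defect-prone as predictions and the parts where testing actually revealed defects as ground truth. The sketch below is a minimal illustration under these assumptions; the function and module names are hypothetical, and the paper's actual evaluation procedure may differ in detail.

```python
# Minimal sketch (assumption: an assumed relationship flags a set of code
# parts as defect-prone based on inspection data, and testing later reveals
# which parts actually contained defects).

def f_measure(predicted: set[str], actual: set[str]) -> tuple[float, float, float]:
    """Return (precision, recall, F-measure) for one assumed relationship."""
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    # F-measure is the harmonic mean of precision and recall.
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

# Hypothetical example: parts flagged by an inspection-based assumption
# versus parts where testing actually found defects.
flagged = {"moduleA", "moduleB", "moduleC"}
defective = {"moduleB", "moduleC", "moduleD"}
print(f_measure(flagged, defective))  # approx. (0.667, 0.667, 0.667)
```

Aggregating such F-measures over several assumed relationships then yields the kind of comparison the case study performs against the confidence-counter strategy.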
