Combining Data from Reading Experiments in Software Inspections: A Feasibility Study

Software inspections have been around for 25 years, and most software engineering researchers and professionals know that they are generally a cost-effective means of removing software defects. However, this does not mean that there is consensus about how they should be conducted in terms of reading techniques, the number of reviewers, or reviewer effectiveness. Still, software inspections are probably the most extensively empirically studied technique in software engineering, and a large body of knowledge is therefore available in the literature. This paper uses 30 data sets from software inspections reported in the literature to study different aspects of inspections. As a feasibility study, the data are amalgamated to increase our understanding and to illustrate what could be achieved if studies were conducted in which a combination of data can be collected. It is shown how the combined data may help to evaluate the influence of several different aspects, including reading techniques, team sizes, and professionals vs. students. The objective is primarily to illustrate how more general knowledge may be gained by combining data from several studies. It is concluded that combining data is possible, although there are potential validity threats. Research results are examined with reference to software inspections on three levels: organization, project and individual.