Evaluating a Causal Model of Review Factors in an Industrial Setting

Abstract: Technical reviews are a cost-effective method commonly used to detect software defects early. To exploit their full potential, it is necessary to collect measurement data to monitor and improve the implemented review procedure continuously. This paper postulates a model of the factors that affect the number of defects detected during a technical review and tests the model empirically using data from a large software development organization. The data set comprises more than 300 specification, design, and code reviews performed at Lucent's Product Realization Center for Optical Networking (PRC-ON) in Nuernberg, Germany. Since development projects within PRC-ON typically spend between 12% and 18% of total development effort on reviews, it is essential to understand the relationships among the factors that determine review success. A major finding of this study is that the number of detected defects is determined primarily by the reviewers' preparation effort rather than by the size of the reviewed artifact. In addition, the size of the reviewed artifact has only limited influence on review effort. Furthermore, we identified consistent ceiling effects in the relationships of size and effort with the number of defects detected. These results suggest that managers at PRC-ON must plan for adequate preparation effort when scheduling reviews in order to ensure high-quality artifacts and a mature review process.
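To make the kind of relationship the abstract describes concrete, the following is a minimal sketch of a regression of detected defects on preparation effort and artifact size. The variable names, synthetic data, and coefficients are assumptions for illustration only; they are not the study's data, its actual causal model, or its reported results.

```python
# Hypothetical illustration: regress number of detected defects on
# reviewers' preparation effort and artifact size, as in the kind of
# model the paper tests. All data below are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 300                                   # roughly the number of reviews in the study
prep_effort = rng.gamma(2.0, 2.0, n)      # preparation effort in person-hours (hypothetical)
artifact_size = rng.gamma(3.0, 100.0, n)  # size of reviewed artifact, e.g. LOC (hypothetical)

# Assume defects depend mainly on preparation effort, with a weak size
# effect and diminishing returns (a ceiling), modeled here via a log term.
defects = 2.0 * np.log1p(prep_effort) + 0.002 * artifact_size + rng.normal(0.0, 0.5, n)

# Ordinary least squares: defects ~ 1 + prep_effort + artifact_size
X = np.column_stack([np.ones(n), prep_effort, artifact_size])
coef, *_ = np.linalg.lstsq(X, defects, rcond=None)
print("intercept, effort, size coefficients:", coef)
```

Under these assumptions, the fitted effort coefficient dominates the size coefficient, mirroring the qualitative claim that preparation effort, not artifact size, drives the number of defects found.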
