An Experiment to Assess the Cost-Benefits of Code Inspections in Large Scale Software Development

We conducted a long-term experiment to compare the costs and benefits of several different software inspection methods. These methods were applied by professional developers to a commercial software product they were creating. Because the laboratory for this experiment was a live development effort, we took special care to minimize cost and risk to the project while maximizing our ability to gather useful data. The article has three goals: (1) to describe the experiment's design and show how we used simulation techniques to optimize it; (2) to present our results and discuss their implications for both software practitioners and researchers; and (3) to discuss several new questions raised by our findings. For each inspection, we randomly assigned three independent variables: (1) the number of reviewers on each inspection team (1, 2, or 4); (2) the number of teams inspecting the code unit (1 or 2); and (3) the requirement that defects be repaired between the first and second teams' inspections. The reviewers for each inspection were randomly selected without replacement from a pool of 11 experienced software developers. The dependent variables for each inspection were the inspection interval (elapsed time), the total effort, and the defect detection rate. Our results showed that these treatments did not significantly influence the defect detection rate, but that certain combinations of them dramatically increased the inspection interval.
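To make the treatment structure concrete, the following is a minimal sketch of the random assignment just described. It is written in Python; the paper does not publish its assignment procedure, so every name here (REVIEWER_POOL, assign_inspection, the treatment encodings) is a hypothetical illustration, not the authors' code.

import random

# Independent variables as described in the abstract (assumed encoding).
TEAM_SIZES = [1, 2, 4]           # reviewers per inspection team
NUM_TEAMS = [1, 2]               # teams inspecting the code unit
REPAIR_OPTIONS = [True, False]   # repair defects between the two teams' inspections?

# Pool of 11 experienced developers (placeholder names).
REVIEWER_POOL = ["dev%02d" % i for i in range(1, 12)]

def assign_inspection(rng):
    """Randomly assign one inspection's treatment and draw its reviewers."""
    team_size = rng.choice(TEAM_SIZES)
    n_teams = rng.choice(NUM_TEAMS)
    # The repair requirement is only meaningful when two teams inspect the unit.
    repair = rng.choice(REPAIR_OPTIONS) if n_teams == 2 else False
    # Select all reviewers for this inspection without replacement.
    reviewers = rng.sample(REVIEWER_POOL, team_size * n_teams)
    teams = [reviewers[i * team_size:(i + 1) * team_size] for i in range(n_teams)]
    return {"team_size": team_size, "n_teams": n_teams,
            "repair_between": repair, "teams": teams}

if __name__ == "__main__":
    rng = random.Random(0)       # fixed seed so the example is reproducible
    for unit in range(3):
        print("code unit", unit, "->", assign_inspection(rng))

Drawing all of an inspection's reviewers in a single without-replacement sample guarantees that no developer serves on two teams for the same code unit, which mirrors the selection rule stated in the abstract.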
