Validating the defect detection performance advantage of group designs for software reviews: report of a laboratory experiment using program code

It is widely accepted that software development technical reviews (SDTRs) are a useful technique for finding defects in software products. Recent debates centre on whether review meetings are necessary (Porter and Votta 1994, Porter et al. 1995, McCarthy et al. 1996, Lanubile and Visaggio 1996). This paper reports the findings of a laboratory experiment conducted to investigate the performance advantage of interacting groups over average individuals and artificial (nominal) groups. We found that interacting groups outperform both average individuals and nominal groups. The interacting groups' advantage lies not in finding defects but in discriminating between true defects and false positives. The practical implication of this research is that nominal groups are a viable alternative review design in situations where individual reviewers report few false positives.
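To make the comparison concrete, the sketch below shows one plausible way to score the three review designs against a known list of seeded defects. It is a minimal illustration only: the defect identifiers, the reviewer data, and the rule that a nominal group is simply the pooled, deduplicated union of its members' individual reports are assumptions for exposition, not the experiment's actual protocol.

# A minimal, illustrative sketch (not the paper's protocol): scoring review
# performance against a known list of seeded defects. All identifiers and
# reviewer data below are hypothetical.
from typing import List, Set

TRUE_DEFECTS: Set[str] = {"D1", "D2", "D3", "D4", "D5"}  # seeded defects (assumed)

def score(reported: Set[str]) -> dict:
    # Split reported items into true defects and false positives.
    return {
        "found": len(reported & TRUE_DEFECTS),
        "false_positives": len(reported - TRUE_DEFECTS),
    }

def nominal_group(individual_reports: List[Set[str]]) -> Set[str]:
    # A nominal group pools its members' individual reports without a meeting:
    # the union of everything any member reported, duplicates removed.
    pooled: Set[str] = set()
    for report in individual_reports:
        pooled |= report
    return pooled

# Three reviewers working alone (hypothetical data); "X" items are false positives.
reviewers = [{"D1", "D2", "X9"}, {"D2", "D3"}, {"D1", "X7"}]

print("individuals:  ", [score(r) for r in reviewers])
print("nominal group:", score(nominal_group(reviewers)))
# An interacting group meets and can strike items such as X9 and X7, which is
# the discrimination advantage reported in the paper.

Under this scoring, a nominal group finds at least as many true defects as its best member but also inherits every member's false positives, which is why the paper recommends it only when individuals report few false positives.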

[1] M. E. Shaw, Group dynamics: the psychology of small group behavior, 1971.

[2] Philip M. Johnson et al., Assessing software review meetings: A controlled experimental study using CSRS, 1997, Proceedings of the 19th International Conference on Software Engineering.

[3] E. Ramsden, Group Process and Productivity, 1973.

[4] Stephen G. Eick et al., Estimating software fault content before coding, 1992, International Conference on Software Engineering.

[5] Eliot R. Smith et al., Research methods in social relations, 1962.

[6] Chris Sauer et al., A Framework for Software Development Technical Reviews, 1994, Software Quality and Productivity.

[7] M. J. Norušis et al., SPSS 6.1 Guide to Data Analysis, 1997.

[8] Gudmund J. W. Smith et al., The internal consistency of the Humm-Wadsworth Temperament Scale, 1958.

[9] David Lorge Parnas et al., Active design reviews: principles and practices, 1985, ICSE '85.

[10] J. Davitz et al., A survey of studies contrasting the quality of group performance and individual performance, 1920-1957, 1958, Psychological Bulletin.

[11] Thomas Gilb et al., Software Inspection, 1994.

[12] Lawrence G. Votta et al., Does every inspection need a meeting?, 1993, SIGSOFT '93.

[13] Wei-Tek Tsai et al., An experimental study of fault detection in user requirements documents, 1992, TSEM.

[14] Victor R. Basili, Comparing Detection Methods for Software Requirements Inspections, 1995.

[15] Gerald M. Weinberg et al., Handbook of Walkthroughs, Inspections, and Technical Reviews: Evaluating Programs, Projects, and Products, 1990.

[16] Glenford J. Myers et al., A controlled experiment in program testing and code walkthroughs/inspections, 1978, CACM.

[17] Adam A. Porter et al., An experiment to assess different defect detection methods for software requirements inspections, 1994, Proceedings of the 16th International Conference on Software Engineering.

[18] Tyler Welburn, Structured COBOL: Fundamentals and style, 1981.

[19] Erik Kamsties et al., An Empirical Evaluation of Three Defect-Detection Techniques, 1995, ESEC.

[20] Robert G. Ebenau et al., Software Inspection Process, 1993.

[21] Philip Yetton et al., Individual versus group problem solving: An empirical test of a best-member strategy, 1982.

[22] I. Steiner, Group process and productivity, 1972.

[23] Adam A. Porter et al., Comparing Detection Methods for Software Requirements Inspections: A Replicated Experiment, 1995, IEEE Trans. Software Eng.

[24] Chris Sauer et al., Validating the defect detection performance advantage of group designs for software reviews: report of a replicated experiment, 1997, Proceedings of the Australian Software Engineering Conference (ASWEC 97).

[25] Michael E. Fagan, Design and Code Inspections to Reduce Errors in Program Development, 1976, IBM Syst. J.

[26] Harvey Siy et al., Identifying the mechanisms driving code inspection costs and benefits, 1996.

[27] Philip Yetton et al., Improving Group Performance by Training in Individual Problem Solving, 1987.

[28] Harvey P. Siy et al., An experiment to assess cost-benefits of inspection meetings and their alternatives: a pilot study, 1996, Proceedings of the 3rd International Software Metrics Symposium.

[29] John C. Knight et al., An improved inspection technique, 1993, CACM.