The Optimal Team Size for UML Design Inspections

Recent evidence indicates that the Unified Modeling Language (UML) is the most preferred and widely used modeling technique for object-oriented analysis and design. With UML's growing popularity comes a need for effective quality assurance techniques on projects that use it. Our focus in this study is on inspections of UML design documents. The basic premise of software inspections is that they detect and remove defects before those defects propagate to subsequent development phases, where detection and correction costs escalate. However, inspection performance can vary considerably, so it is important to optimize inspections. One way to do so is to control the inspection team size. This paper presents an empirical evaluation of the optimal team size for UML design inspections. Our results show that there is no single optimal team size; the optimum depends on conditions such as the cost of detecting a defect late in the process and the duration of the inspection meeting. This paper quantifies these factors and proposes optimal team sizes under various conditions. Our results also strongly indicate that contemporary recommendations of two-person inspection teams are far from optimal.
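
To make the trade-off concrete, the following is a minimal cost-model sketch of how team size, meeting duration, and the cost of late defect detection interact. It is an illustrative assumption on our part, not the paper's actual model: the function name, the linear cost structure, and all parameter values (cost_if_escaped, meeting_hours, prep_hours) are hypothetical.

    # Illustrative sketch only: linear costs and all values are assumptions,
    # not the model or data reported in the paper.
    def inspection_net_benefit(team_size,
                               defects_found,
                               cost_if_escaped=10.0,  # assumed hours to fix a defect that escapes to a later phase
                               meeting_hours=2.0,     # assumed inspection meeting duration
                               prep_hours=3.0):       # assumed individual preparation effort per inspector
        """Net benefit (person-hours saved) of one UML design inspection."""
        savings = defects_found * cost_if_escaped
        effort = team_size * (meeting_hours + prep_hours)
        return savings - effort

    # Example: if a fourth inspector uncovers only one additional defect,
    # whether the larger team pays off depends directly on cost_if_escaped
    # and meeting_hours, which is why no single team size is optimal.
    for size, found in [(2, 6), (3, 8), (4, 9)]:
        print(size, inspection_net_benefit(size, found))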
