An Empirical Comparative Study of Checklist-Based and Ad Hoc Code Reading Techniques in a Distributed Groupware Environment

Software inspection is a necessary and important tool for software quality assurance. Since it was introduced by Fagan at IBM in 1976, arguments have persisted over which method should be adopted to carry out the exercise, whether it should be paper-based or tool-based, and what reading technique should be used on the inspection document. Extensive work has been done to determine the effectiveness of reviewers in paper-based environments when using ad hoc and checklist reading techniques. In this work, we take software inspection research further by examining whether there is any significant difference in the defect detection effectiveness of reviewers when they use either ad hoc or checklist reading techniques in a distributed groupware environment. Twenty final-year undergraduate students of computer science, divided into ad hoc and checklist reviewer groups of ten members each, were employed to inspect a medium-sized Java code synchronously on groupware deployed on the Internet. The data obtained were subjected to tests of hypotheses using independent t-tests and correlation coefficients. Results from the study indicate that there are no significant differences in the defect detection effectiveness, effort (time taken in minutes), and false positives reported by reviewers using either ad hoc or checklist-based reading techniques in the distributed groupware environment studied.
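The independent t-test used to compare the two reviewer groups can be sketched as follows. This is a minimal illustration, not the study's analysis: the defect counts below are hypothetical stand-ins for two groups of ten reviewers, and the pooled-variance (Student's) t statistic is computed directly from the standard formula.

```python
import math
from statistics import mean, variance

def independent_t(a, b):
    """Pooled-variance (Student's) independent t statistic and degrees of freedom."""
    na, nb = len(a), len(b)
    # Pooled sample variance across both groups
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical defect counts for ten ad hoc and ten checklist reviewers
ad_hoc    = [4, 6, 5, 3, 7, 5, 4, 6, 5, 4]
checklist = [5, 6, 4, 5, 7, 6, 5, 4, 6, 5]

t, df = independent_t(ad_hoc, checklist)
print(f"t = {t:.3f}, df = {df}")
# An |t| below the two-tailed critical value t(0.05, 18) ≈ 2.101 would
# indicate no significant difference, matching the study's reported outcome.
```

With df = 20 - 2 = 18, the null hypothesis of equal group means is retained whenever |t| falls below the critical value for the chosen significance level.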

[1] Craig K. Tyran, A Software Inspection Exercise for the Systems Analysis and Design Course, 2006, J. Inf. Syst. Educ.

[2] Robert Tibshirani, et al., An Introduction to the Bootstrap, 1994.

[3] Susan A. Murphy, et al., Monographs on Statistics and Applied Probability, 1990.

[4] Marc Roper, et al., Practical Code Inspection for Object-Oriented Systems, 2001.

[5] Laurie A. Williams, et al., Preliminary results on using static analysis tools for software inspection, 2004, 15th International Symposium on Software Reliability Engineering.

[6] Lawrence G. Votta, et al., Does every inspection need a meeting?, 1993, SIGSOFT '93.

[7] Adam A. Porter, et al., Empirical studies of software engineering: a roadmap, 2000, ICSE '00.

[8] Shinji Kusumoto, et al., An experimental comparison of checklist-based reading and perspective-based reading for UML design document inspection, 2002, Proceedings International Symposium on Empirical Software Engineering.

[9] Stefan Biffl, et al., Investigating the Defect Detection Effectiveness and Cost Benefit of Nominal Inspection Teams, 2003, IEEE Trans. Software Eng.

[10] Brett Kyle, Successful Industrial Experimentation, 1995.

[11] Shari Lawrence Pfleeger, et al., Software Engineering: The Production of Quality Software, 1987.

[12] J. Miller, et al., A Comparison of Tool-Based and Paper-Based Software Inspection, 1998, Empirical Software Engineering.

[13] Adam E. Irgon, et al., Knowledge-Based Code Inspection with ICICLE, 1992, IAAI.

[14] Gérard Memmi, et al., Scrutiny: A Collaborative Inspection and Review System, 1993, ESEC.

[15] Michael E. Fagan, Advances in software inspections, 1986, IEEE Transactions on Software Engineering.

[16] Philip M. Johnson, et al., Does Every Inspection Really Need a Meeting?, 1998, Empirical Software Engineering.

[17] Thomas Gilb, et al., Software Inspection, 1994.

[18] Lasse Harjumaa, et al., Virtual Software Inspections over the Internet, 2000, ICSE 2000.

[19] Bill Brykczynski, et al., Software Inspection: An Industry Best Practice, 1996.

[20] Adam A. Porter, et al., Comparing Detection Methods for Software Requirements Inspections: A Replicated Experiment, 1995, IEEE Trans. Software Eng.

[21] Lionel C. Briand, et al., Quantitative evaluation of capture-recapture models to control software inspections, 1997, Proceedings of the Eighth International Symposium on Software Reliability Engineering.

[22] Susan Wiedenbeck, et al., Empirical studies of software engineering, 2004, Int. J. Hum. Comput. Stud.

[23] Wei-Tek Tsai, et al., Distributed, collaborative software inspection, 1993, IEEE Software.

[24] Oliver Laitenberger, et al., An encompassing life cycle centric survey of software inspection, 2000, J. Syst. Softw.

[25] Adam A. Porter, et al., Assessing Software Review Meetings: Results of a Comparative Analysis of Two Experimental Studies, 1997, IEEE Trans. Software Eng.

[26] O. Laitenberger, A Survey of Software Inspection Technologies, 2001.

[27] John W. Gintell, et al., Lessons learned by building and using Scrutiny, a collaborative software inspection system, 1995, Proceedings Seventh International Workshop on Computer-Aided Software Engineering.

[28] Claes Wohlin, et al., Increasing the Understanding of Effectiveness in Software Inspections Using Published Data Sets, 2005, J. Res. Pract. Inf. Technol.

[29] David Lorge Parnas, et al., Active design reviews: principles and practices, 1985, ICSE '85.

[30] Michael E. Fagan, Design and Code Inspections to Reduce Errors in Program Development, 1976, IBM Syst. J.

[31] Lionel C. Briand, et al., Using simulation to build inspection efficiency benchmarks for development projects, 1998, Proceedings of the 20th International Conference on Software Engineering.

[32] Giuseppe Visaggio, et al., Evaluating Defect Detection Techniques for Software Requirements Inspections, 2000.

[33] Vahid Mashayekhi, et al., A Case Study of Distributed, Asynchronous Software Inspection, 1997, Proceedings of the (19th) International Conference on Software Engineering.