Comparing Inspection Methods using Controlled Experiments

Objective: In this paper we present an empirical study aimed at comparing three software inspection methods in terms of the time required, precision, and recall. The main objective of this study is to give software engineers some insight into choosing which inspection method to adopt.

Method: We conducted a controlled experiment and a replication. These experiments involved 48 Master's students in Computer Science at the University of Salerno; six academic researchers were also involved. The students had to discover defects within a software artefact using inspection methods that differ in discipline and flexibility. In particular, we selected a disciplined but not flexible method (Fagan's process), a disciplined and flexible method (a virtual inspection), and a flexible but not disciplined method (pair inspection).

Results: We observed a significant difference in favour of the pair inspection method for the time spent to perform the tasks. The data analysis also revealed a significant difference in favour of Fagan's inspection process for precision. Finally, the effect of the inspection method on recall was not significant.

Conclusions: The empirical investigation showed that the discipline and flexibility of an inspection method affect both the time needed to identify defects and the precision of the inspection results. In particular, more flexible methods require less time to inspect a software artefact, while more disciplined methods lead to fewer false defects being reported.
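As a minimal sketch of how the two outcome metrics can be computed from an inspection session (the variable names and the seeded-defect setup below are illustrative assumptions, not taken from the paper): precision is the fraction of reported defects that are real, and recall is the fraction of real defects that were found.

```python
# Hypothetical sketch: precision and recall of one inspection session.
# true_defects: set of actual (e.g. seeded) defect IDs in the artefact.
# reported: set of defect IDs a team reported, possibly with false positives.

def inspection_metrics(true_defects, reported):
    """Return (precision, recall) for one inspection session."""
    true_positives = len(true_defects & reported)
    precision = true_positives / len(reported) if reported else 0.0
    recall = true_positives / len(true_defects) if true_defects else 0.0
    return precision, recall

# Example: 10 seeded defects; a team reports 8 items, 6 of them real.
seeded = set(range(10))
found = {0, 1, 2, 3, 4, 5, 97, 98}  # 6 true defects + 2 false defects
precision, recall = inspection_metrics(seeded, found)
print(precision, recall)  # 0.75 0.6
```

A high-precision method (such as a disciplined process) keeps the false-defect count in `reported` low, while recall depends on how many of the seeded defects are actually discovered.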
