Comparing Detection Methods For Software Requirements Inspections: A Replication Using Professional Subjects

Software requirements specifications (SRS) are often validated manually. One such process is inspection, in which several reviewers independently analyze all or part of the specification and search for faults. These faults are then collected at a meeting of the reviewers and author(s). Usually, reviewers use Ad Hoc or Checklist methods to uncover faults. These methods force all reviewers to rely on nonsystematic techniques to search for a wide variety of faults. We hypothesize that a Scenario-based method, in which each reviewer uses different, systematic techniques to search for different, specific classes of faults, will have a significantly higher success rate. In previous work we evaluated this hypothesis using 48 graduate students in computer science as subjects. We have now replicated this experiment using 18 professional developers from Lucent Technologies as subjects. Our goals were (1) to extend the external credibility of our results by studying professional developers, and (2) to compare the performance of the professionals with that of the graduate students, to better understand how well the results of the less expensive student experiments generalize. For each inspection we performed four measurements: (1) individual fault detection rate, (2) team fault detection rate, (3) percentage of faults first identified at the collection meeting (meeting gain rate), and (4) percentage of faults first identified by an individual but never reported at the collection meeting (meeting loss rate). For both the professionals and the students, the experimental results were that (1) the Scenario method had a higher fault detection rate than either the Ad Hoc or the Checklist method, (2) Checklist reviewers were no more effective than Ad Hoc reviewers, and (3) collection meetings produced no net improvement in the fault detection rate: meeting gains were offset by meeting losses. Finally, although specific measures differed between the professional and student populations, the outcomes of almost all statistical tests were identical. This suggests that the graduate students provided an adequate model of the professional population and that the much greater expense of conducting studies with professionals may not always be required.
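The four measurements are simple set-based rates over the known faults in the SRS. The following minimal sketch (not code from the paper; the function and variable names are hypothetical, and it assumes each fault has a known identifier) shows one way to compute them:

```python
# Illustrative sketch, not the paper's instrumentation: computing the four
# inspection measurements from sets of fault IDs. All names are hypothetical.

def inspection_rates(total_faults, individual_finds, meeting_reported):
    """Compute the four rates described in the abstract.

    total_faults     -- set of all known fault IDs in the SRS
    individual_finds -- dict mapping reviewer -> set of fault IDs that
                        reviewer found during individual review
    meeting_reported -- set of fault IDs reported at the collection meeting
    """
    n = len(total_faults)

    # (1) Individual fault detection rate, one value per reviewer.
    individual_rates = {
        reviewer: len(found & total_faults) / n
        for reviewer, found in individual_finds.items()
    }

    # Union of everything any reviewer found before the meeting.
    union_individual = set().union(*individual_finds.values())

    # (2) Team fault detection rate: faults recorded at the meeting.
    team_rate = len(meeting_reported & total_faults) / n

    # (3) Meeting gain rate: faults first identified at the meeting.
    meeting_gain = len((meeting_reported - union_individual) & total_faults) / n

    # (4) Meeting loss rate: faults found by some individual but never
    #     reported at the meeting.
    meeting_loss = len((union_individual - meeting_reported) & total_faults) / n

    return individual_rates, team_rate, meeting_gain, meeting_loss


if __name__ == "__main__":
    faults = {"F1", "F2", "F3", "F4"}
    finds = {"alice": {"F1", "F2"}, "bob": {"F2", "F3"}}
    reported = {"F1", "F2", "F4"}  # F3 found by bob but lost; F4 gained
    print(inspection_rates(faults, finds, reported))
```

Under these definitions the net effect of a collection meeting is meeting_gain minus meeting_loss, and result (3) above corresponds to that difference being roughly zero: faults first surfaced at the meeting were offset by faults found individually but never recorded there.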
