Comparing Detection Methods for Software Requirements Inspections: A Replicated Experiment

Software requirements specifications (SRS) are often validated manually. One such process is inspection, in which several reviewers independently analyze all or part of the specification and search for faults. These faults are then collected at a meeting of the reviewers and author(s). Usually, reviewers use Ad Hoc or Checklist methods to uncover faults. These methods force all reviewers to rely on nonsystematic techniques to search for a wide variety of faults. We hypothesize that a Scenario-based method, in which each reviewer uses different, systematic techniques to search for different, specific classes of faults, will have a significantly higher success rate. We evaluated this hypothesis using a 3 × 2⁴ partial factorial, randomized experimental design. Forty-eight graduate students in computer science participated in the experiment. They were assembled into sixteen three-person teams. Each team inspected two SRSs using some combination of Ad Hoc, Checklist, or Scenario methods. For each inspection we performed four measurements: (1) individual fault detection rate, (2) team fault detection rate, (3) percentage of faults first identified at the collection meeting (meeting gain rate), and (4) percentage of faults first identified by an individual, but never reported at the collection meeting (meeting loss rate). The experimental results show that (1) the Scenario method had a higher fault detection rate than either Ad Hoc or Checklist methods, (2) Scenario reviewers were more effective at detecting the faults their scenarios were designed to uncover, and were no less effective at detecting other faults than either Ad Hoc or Checklist reviewers, (3) Checklist reviewers were no more effective than Ad Hoc reviewers, and (4) collection meetings produced no net improvement in the fault detection rate: meeting gains were offset by meeting losses.
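To make the four measurements concrete, the following minimal Python sketch computes them from sets of fault identifiers. The fault sets, reviewer names, and fault total are hypothetical illustrations; only the rate definitions follow the abstract, with each rate expressed as a fraction of the total faults present in the inspected document.

    # Minimal sketch of the four inspection measurements; the fault data
    # below is hypothetical, only the rate definitions follow the abstract.

    # Faults known to be present in the inspected SRS (hypothetical).
    total_faults = {f"F{i}" for i in range(1, 21)}  # 20 seeded faults

    # Faults each reviewer found while working independently (hypothetical).
    individual_finds = {
        "reviewer_1": {"F1", "F2", "F5"},
        "reviewer_2": {"F2", "F7"},
        "reviewer_3": {"F5", "F9", "F12"},
    }

    # Faults reported at the team's collection meeting (hypothetical).
    meeting_report = {"F1", "F2", "F5", "F7", "F9", "F15"}

    # Union of all faults found by individuals before the meeting.
    union_individual = set().union(*individual_finds.values())

    # (1) Individual fault detection rate: fraction of all faults a
    #     single reviewer found on their own.
    individual_rates = {
        reviewer: len(found & total_faults) / len(total_faults)
        for reviewer, found in individual_finds.items()
    }

    # (2) Team fault detection rate: fraction of all faults the team
    #     reported after the collection meeting.
    team_rate = len(meeting_report & total_faults) / len(total_faults)

    # (3) Meeting gain rate: faults first identified at the meeting,
    #     i.e. reported there but found by no individual beforehand.
    meeting_gain = len(meeting_report - union_individual) / len(total_faults)

    # (4) Meeting loss rate: faults found by some individual but never
    #     reported at the collection meeting.
    meeting_loss = len(union_individual - meeting_report) / len(total_faults)

    print(individual_rates, team_rate, meeting_gain, meeting_loss)

With the hypothetical data above, the meeting both gains a fault (F15) and loses one (F12), so gain and loss rates cancel; this mirrors the paper's fourth finding that meeting gains were offset by meeting losses.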
