A controlled experiment for evaluating a metric-based reading technique for requirements inspection

Natural language requirements documents are often verified by means of a reading technique. Recommendations for defining a good reading technique point out that a concrete technique must be suitable not only for specific classes of defects, but also for the concrete notation in which the requirements are written. Following this suggestion, we have proposed a metric-based reading (MBR) technique for requirements inspections, whose main goal is to identify specific types of defects in use cases. The systematic approach of MBR is based on a set of rules of the form "if the metric value is too low (or high), the presence of defects of type defType_1, ..., defType_n must be checked". We hypothesised that if reviewers know these rules, the inspection process is more effective and more efficient, meaning that the defect detection rate is higher and the number of defects identified per unit of time increases. However, this hypothesis lacks validity unless it is empirically validated. For that reason, the main goal of this paper is to describe a controlled experiment we carried out to ascertain whether the use of MBR really helps in the detection of defects in comparison with a simple checklist technique. The experiment results revealed that MBR reviewers were more effective at detecting defects than checklist reviewers, but they were not more efficient, because MBR reviewers took longer than checklist reviewers on average.
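To illustrate the shape of such rules, the following is a minimal sketch of how MBR-style checks might be expressed in code. The metric names, thresholds, and defect-type labels below are invented for illustration only; they are not the authors' actual rule set.

```python
# Hypothetical sketch of MBR-style rules: each rule compares a use-case
# metric against a threshold and, when triggered, names the defect types
# whose presence must be checked. All names/values here are assumptions.

def mbr_rules(metrics):
    """Map use-case metric values to a list of defect types to check.

    `metrics` is a dict of metric name -> value; the thresholds and
    defect-type names are illustrative, not from the original paper.
    """
    checks = []
    # Rule: too few steps may point to an incomplete use case.
    if metrics.get("num_steps", 0) < 3:
        checks.append("incompleteness")
    # Rule: too many steps may point to an overly complex use case.
    if metrics.get("num_steps", 0) > 10:
        checks.append("excessive_complexity")
    # Rule: no alternative flows may point to missing error handling.
    if metrics.get("num_alternatives", 0) == 0:
        checks.append("missing_alternative_flows")
    return checks

# Example: a short use case with no alternative flows triggers two checks.
print(mbr_rules({"num_steps": 2, "num_alternatives": 0}))
# → ['incompleteness', 'missing_alternative_flows']
```

A reviewer following MBR would compute the metrics for each use case first, then consult the triggered defect types while reading, rather than scanning for every defect class at once.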
