Revisiting the problem of using problem reports for quality assessment

In this paper, we describe our experience with using problem reports from industry for quality assessment. The non-uniform terminology used in problem reports and the associated validity concerns have been the subject of earlier research but are far from settled. To distinguish between terms such as defects or errors, we propose answering three questions about the scope of a study: what (the appearance of a problem or its cause), where (problems related to software, whether executable or not, or to the whole system), and when (problems recorded in all development life-cycle phases or only some of them). We discuss challenges in defining research questions and metrics, collecting and analyzing data, generalizing the results, and reporting them. Ambiguity in defining problem-report fields, together with missing, inconsistent, or wrong data, threatens the value of the collected evidence. Some of these concerns could be addressed by answering basic questions about the problem-reporting fields and by improving data-collection routines and tools.
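
To make the three scoping questions concrete, a minimal sketch of a problem-report record is given below in Python. The class name, field names, and enumeration categories are illustrative assumptions for this sketch, not the terminology or schema used in the studied projects or in the paper itself.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    # Illustrative categories for the three scope questions; the labels are
    # assumptions made for this sketch, not the paper's classification.
    class What(Enum):
        APPEARANCE = "observed failure"      # how the problem shows up
        CAUSE = "underlying fault"           # the defect behind it

    class Where(Enum):
        EXECUTABLE_SOFTWARE = "executable software"
        NON_EXECUTABLE_ARTIFACT = "non-executable artifact (e.g. requirements, design)"
        SYSTEM = "whole system (software plus its environment)"

    class When(Enum):
        REQUIREMENTS = "requirements"
        DESIGN = "design"
        IMPLEMENTATION = "implementation"
        TESTING = "testing"
        OPERATION = "operation and maintenance"

    @dataclass
    class ProblemReport:
        """Minimal problem-report record with the three scope dimensions made explicit."""
        report_id: str
        summary: str
        what: What
        where: Where
        when: When
        severity: Optional[str] = None  # often missing or inconsistently filled in practice

    # Example: a fault located in code, recorded during testing.
    example = ProblemReport(
        report_id="PR-0421",
        summary="Null pointer dereference in session handler",
        what=What.CAUSE,
        where=Where.EXECUTABLE_SOFTWARE,
        when=When.TESTING,
    )

Making the scope fields explicit and mandatory in this way is one possible remedy for the ambiguity and missing data discussed above, since the reporter must then commit to a classification when the report is filed.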
