Using Object-Oriented Design Metrics to Predict Software Defects

Many object-oriented design metrics have been developed [1,3,8,17,24] to help predict software defects or to evaluate design quality. Because a defect prediction model can give crucial clues about the distribution and location of defects, and thereby guide test prioritization, accurate prediction can save costs in the testing process. Considerable research has been performed on defect prediction methods; see the surveys by Purao and Vaishnavi [22] and by Wahyudin et al. [25]. Unfortunately, few results reach a statistically significant level, so further empirical validation is necessary to prove the usefulness of the metrics and of software prediction models in industrial practice.

Our study was made possible through the creation of a new metric calculation tool. Many tools that calculate object-oriented metrics already exist, so why create another one? In practice the situation is far from perfect. The available programs are either extremely inefficient (sometimes they do not work with large software projects at all), not available as open source, which makes it difficult to reason about their results, or incomplete, in that the set of calculated metrics is not wide enough. It is very hard to find a tool that calculates all metrics from the Chidamber and Kemerer (C&K) metrics suite [3]. Having both the C&K and QMOOD [1] metrics suites in one tool is even rarer, and to the authors' knowledge there is no other tool that calculates the metrics suggested by Tang et al. [24].

Ckjm calculates metrics that have been recommended as good quality indicators. Several studies investigate the C&K metric suite and have empirically demonstrated its usefulness in quality or defect prediction [2, 10, 11, 20]; there are similar recommendations for the QMOOD metrics suite [1, 20] and for the quality-oriented extension of C&K [24]. Ckjm does not offer a GUI and its focus is not on
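To make the C&K metrics concrete, the following is a minimal illustrative sketch, not ckjm's actual implementation (ckjm analyzes bytecode; this sketch uses Java reflection on loaded classes, and the class name `CkSketch` is hypothetical). It computes two of the simplest C&K metrics: WMC (Weighted Methods per Class) with every method weighted 1, and DIT (Depth of Inheritance Tree).

```java
import java.util.ArrayList;

// Toy computation of two Chidamber & Kemerer metrics via reflection.
public class CkSketch {

    // WMC with unit weights: the number of methods declared
    // directly in the class (inherited methods are not counted).
    static int wmc(Class<?> c) {
        return c.getDeclaredMethods().length;
    }

    // DIT: the number of superclass links between the class and the
    // root of the hierarchy (java.lang.Object has depth 0).
    static int dit(Class<?> c) {
        int depth = 0;
        for (Class<?> s = c.getSuperclass(); s != null; s = s.getSuperclass()) {
            depth++;
        }
        return depth;
    }

    public static void main(String[] args) {
        // ArrayList extends AbstractList extends AbstractCollection extends Object.
        System.out.println("WMC(ArrayList) = " + wmc(ArrayList.class));
        System.out.println("DIT(ArrayList) = " + dit(ArrayList.class)); // 3
    }
}
```

Real tools such as ckjm work on compiled `.class` files instead of reflection, which lets them process whole projects without loading the classes, and they additionally compute the harder C&K metrics (CBO, RFC, LCOM) that require analyzing method bodies.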

[1] Mark Lorenz. Object-Oriented Software Metrics, 1994.

[2] Elaine J. Weyuker, et al. Comparing negative binomial and recursive partitioning models for fault prediction, 2008, PROMISE '08.

[3] Elaine J. Weyuker, et al. Predicting the location and number of faults in large software systems, 2005, IEEE Transactions on Software Engineering.

[4] Robert C. Martin, et al. OO Design Quality Metrics, 1997.

[5] Letha H. Etzkorn, et al. Empirical Validation of Three Software Metrics Suites to Predict Fault-Proneness of Object-Oriented Classes Developed Using Highly Iterative or Agile Software Development Processes, 2007, IEEE Transactions on Software Engineering.

[6] Jana Polgar, et al. Object-Oriented Software Metrics, 2005, Encyclopedia of Information Science and Technology.

[7] Carl G. Davis, et al. A Hierarchical Model for Object-Oriented Design Quality Assessment, 2002, IEEE Transactions on Software Engineering.

[8] Yuming Zhou, et al. Empirical Analysis of Object-Oriented Design Metrics for Predicting High and Low Severity Faults, 2006, IEEE Transactions on Software Engineering.

[9] Yue Jiang, et al. Techniques for evaluating fault prediction models, 2008, Empirical Software Engineering.

[10] Mei-Hwa Chen, et al. An empirical study on object-oriented metrics, 1999, Proceedings Sixth International Software Metrics Symposium.

[11] Elaine J. Weyuker, et al. Do too many cooks spoil the broth? Using the number of developers to enhance defect prediction models, 2008, Empirical Software Engineering.

[12] Anas N. Al-Rabadi, et al. A comparison of modified reconstructability analysis and Ashenhurst-Curtis decomposition of Boolean functions, 2004.

[13] Tsutomu Ishida, et al. Metrics and Models in Software Quality Engineering, 1995.

[14] Chris F. Kemerer, et al. A Metrics Suite for Object Oriented Design, 1994, IEEE Transactions on Software Engineering.

[15] Norman E. Fenton, et al. Measurement: A Necessary Scientific Basis, 2004.

[16] Hongfang Liu, et al. An investigation of the effect of module size on defect prediction using static measures, 2005, PROMISE@ICSE.

[17] Hongfang Liu, et al. Building effective defect-prediction models in practice, 2005, IEEE Software.

[18] Khaled El Emam, et al. The Confounding Effect of Class Size on the Validity of Object-Oriented Metrics, 2001, IEEE Transactions on Software Engineering.

[19] Stefan Biffl, et al. A Framework for Defect Prediction in Specific Software Project Contexts, 2008, CEE-SET.

[20] Witold Pedrycz, et al. Practical assessment of the models for identification of defect-prone classes in object-oriented commercial systems using design metrics, 2003, Journal of Systems and Software.

[21] Sandeep Purao, et al. Product metrics for object-oriented systems, 2003, ACM Computing Surveys.

[22] Norman E. Fenton, et al. A Critique of Software Defect Prediction Models, 1999, IEEE Transactions on Software Engineering.

[23] Brian Henderson-Sellers, et al. Object-Oriented Metrics, 1995, TOOLS.

[24] Banu Diri, et al. An Artificial Immune System Approach for Fault Prediction in Object-Oriented Software, 2007, 2nd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX '07).

[25] G. Denaro, et al. An empirical evaluation of fault-proneness models, 2002, Proceedings of the 24th International Conference on Software Engineering (ICSE 2002).

[26] Rainer Koschke, et al. Revisiting the evaluation of defect prediction models, 2009, PROMISE '09.

[27] Bart Baesens, et al. Benchmarking Classification Models for Software Defect Prediction: A Proposed Framework and Novel Findings, 2008, IEEE Transactions on Software Engineering.