Fault Prediction Modeling using Object-Oriented Metrics: An Empirical Study

Software testing is the phase of software development in which a product is examined through a series of verification and validation activities, and in which software faults are detected and removed. Fault detection and removal together can consume up to 60% of the project budget (Beizer, 1990), so applying equal testing and verification effort to all parts of a software system is cost-prohibitive. Software fault-proneness is therefore a key factor in monitoring and controlling software quality: by comparing the predicted distribution of faults (fault-proneness) with the number of faults actually found during testing (software faultiness), the effectiveness of analysis and testing can be judged. Detecting fault-prone code early in the software life cycle allows it to be fixed at minimum cost; a good fault-prediction model thus helps lower development and maintenance costs. In this project, software quality estimation is performed with various classifiers that take software metrics as inputs and produce quality attributes as outputs. An empirical study of these classifiers is then used to judge the quality of the software being developed.
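The abstract describes fault-proneness prediction as classification over software metrics. The following is a minimal, hypothetical sketch of that idea: a simple logistic-regression classifier (one of the classifier families commonly used in such studies) trained on invented values of the Chidamber-Kemerer object-oriented metrics WMC, CBO, RFC, and LCOM. The metric values and labels are illustrative assumptions, not data from the study; real experiments would use measured project data and the paper's actual classifiers.

```python
# Hypothetical sketch: predicting fault-prone classes from OO (CK) metrics
# with a tiny logistic-regression classifier trained by stochastic gradient
# descent. All metric values and labels below are invented for illustration.
import math

# Each row: [WMC, CBO, RFC, LCOM]; label 1 = fault-prone, 0 = not fault-prone.
X = [
    [5, 2, 10, 1], [30, 12, 45, 8], [8, 3, 12, 2], [25, 10, 40, 7],
    [4, 1, 8, 0], [28, 11, 50, 9], [6, 2, 9, 1], [32, 14, 55, 10],
]
y = [0, 1, 0, 1, 0, 1, 0, 1]

def sigmoid(z):
    # Clamp to avoid math.exp overflow for extreme inputs.
    if z < -60.0:
        return 0.0
    if z > 60.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.01, epochs=2000):
    """Fit logistic-regression weights with per-example gradient steps."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Return 1 if the class is predicted fault-prone, else 0."""
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5)

w, b = train(X, y)
preds = [predict(w, b, xi) for xi in X]
```

In an empirical study such as the one described, each candidate classifier would be trained and evaluated this way (typically with cross-validation), and the classifiers compared on how well their predictions match the faults actually observed.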
