Software defect prediction using a cost sensitive decision forest and voting, and a potential solution to the class imbalance problem

Software development projects inevitably accumulate defects during development. Because defects can be costly, careful consideration is needed when predicting which sections of code are likely to contain them. Machine learning classification algorithms can be used to build classifiers that predict defects. While traditional classification algorithms optimize for accuracy, cost-sensitive classification methods aim to make the predictions that incur the lowest classification cost. In this paper we propose a cost-sensitive classification technique called CSForest, which builds an ensemble of decision trees, and a cost-sensitive voting technique called CSVoting, which takes advantage of the set of decision trees to minimize the classification cost. We then investigate a potential solution to class imbalance within our decision forest algorithm. We empirically evaluate the proposed techniques against six classifier algorithms on six publicly available clean datasets that are commonly used in software defect prediction (SDP) research. Our initial experimental results indicate a clear superiority of the proposed techniques over the existing ones.

Author highlights:
- SDP is short for Software Defect Prediction.
- We show that there is no clear winner among the studied existing methods for SDP.
- A cost-sensitive decision forest and voting technique are proposed.
- The superiority of the proposed techniques is shown.
- A framework is proposed for handling class imbalance within the forest algorithm.
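To make the cost-minimization idea concrete, the following is a minimal Python sketch of one plausible form of cost-sensitive voting over a forest of decision trees. It assumes scikit-learn trees that expose predict_proba(), a class ordering of {0: non-defective, 1: defective}, and a user-supplied cost matrix indexed as cost[actual][predicted]; these assumptions are illustrative only and do not reproduce the paper's exact CSForest/CSVoting formulation.

```python
# Illustrative sketch of cost-sensitive voting over a forest of decision trees.
# Assumptions (not taken from the paper): each tree exposes predict_proba(),
# classes are ordered {0: non-defective, 1: defective}, and cost[i][j] is the
# cost of predicting class j when the true class is i.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cost_sensitive_vote(trees, x, cost):
    """Return the label whose expected classification cost, over the whole forest, is lowest."""
    x = np.asarray(x, dtype=float).reshape(1, -1)
    # Average the class-probability estimates of all trees in the forest.
    probs = np.mean([tree.predict_proba(x)[0] for tree in trees], axis=0)
    # Expected cost of predicting label j: sum_i P(true = i) * cost[i][j].
    expected_cost = [sum(probs[i] * cost[i][j] for i in range(len(probs)))
                     for j in range(len(cost[0]))]
    return int(np.argmin(expected_cost))

# Example cost matrix: a missed defect (false negative) is five times as
# costly as a false alarm (false positive); correct predictions cost nothing.
cost = [[0, 1],   # true class: non-defective
        [5, 0]]   # true class: defective

if __name__ == "__main__":
    # Toy, mildly imbalanced data just to show the voting call end to end.
    rng = np.random.RandomState(0)
    X = rng.rand(200, 4)
    y = (X[:, 0] + 0.2 * rng.rand(200) > 0.8).astype(int)
    trees = [DecisionTreeClassifier(max_depth=3, random_state=s).fit(X, y) for s in range(5)]
    print(cost_sensitive_vote(trees, X[0], cost))
```

Because the cost of a false negative outweighs that of a false positive in this example, the vote is biased toward flagging a module as defective whenever the forest assigns it a non-trivial defect probability, which is the intended behaviour of a cost-sensitive classifier.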
