Validating the Effectiveness of Object-Oriented Metrics over Multiple Releases for Predicting Fault Proneness

In this paper, we empirically investigate the relationship of existing class-level object-oriented metrics with fault proneness across multiple releases of software. We first evaluate each metric's ability to predict faults independently using univariate logistic regression analysis. Next, we perform cross-correlation analysis among the significant metrics to identify a subset of these metrics that improves prediction performance. The resulting metric subset is then used to predict faults in subsequent releases of the same project datasets. In this study, we use five publicly available project datasets, each spanning multiple successive releases. Our results show that the identified metric subset yields improved fault prediction, with higher accuracy and reduced misclassification errors.
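The two-stage screening described above (per-metric univariate logistic regression followed by a correlation-based filter) can be illustrated with the following minimal sketch. It assumes a pandas DataFrame per release with CK-style metric columns and a binary `faulty` label; the column names, significance level, and correlation cutoff are illustrative assumptions, not the paper's exact settings.

```python
# Sketch of the two-stage metric screening outlined in the abstract.
# Assumes a DataFrame `release_df` with metric columns and a binary "faulty" label;
# names and thresholds below are hypothetical.
import pandas as pd
import statsmodels.api as sm

METRICS = ["wmc", "cbo", "rfc", "lcom", "dit", "noc"]   # assumed metric columns
ALPHA = 0.05                                            # assumed significance level
CORR_CUTOFF = 0.8                                       # assumed collinearity cutoff


def univariate_screen(df: pd.DataFrame, metrics, alpha=ALPHA):
    """Fit one logistic regression per metric; keep metrics whose
    coefficient is statistically significant (p-value < alpha)."""
    significant = []
    for m in metrics:
        X = sm.add_constant(df[[m]])
        model = sm.Logit(df["faulty"], X).fit(disp=0)
        if model.pvalues[m] < alpha:
            significant.append(m)
    return significant


def drop_correlated(df: pd.DataFrame, metrics, cutoff=CORR_CUTOFF):
    """Greedy filter: among the significant metrics, drop any metric that is
    highly correlated with one already selected."""
    selected = []
    for m in metrics:
        if all(abs(df[m].corr(df[s])) < cutoff for s in selected):
            selected.append(m)
    return selected


# Example usage on one release; the resulting subset would then be used to
# train a fault-prediction model evaluated on subsequent releases.
# release_df = pd.read_csv("release_1.csv")
# subset = drop_correlated(release_df, univariate_screen(release_df, METRICS))
```

A greedy correlation filter is only one way to realize the cross-correlation step; the key idea is that redundant metrics are removed so the retained subset carries complementary information when applied to later releases.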