Efficient cross validation over skewed noisy data

Cross-validation (CV) is widely used in classification problems to estimate the prediction accuracy of a classifier on unseen data. Any improvement in the accuracy estimates produced by cross-validation therefore benefits the many studies that rely on it. This paper focuses on skewed, noisy datasets; fraud detection is a prominent example of an application with skewed data. For CV, the data is usually divided into the required number of folds by simple random sampling (SRS), e.g., 10-fold CV requires the data to be divided into 10 folds. SRS is known to yield poor performance (classification accuracy) when the data is skewed. We propose a new algorithm, based on the frequency histogram of each attribute's values, to divide the dataset into the required number of folds. The effectiveness of the proposed algorithm relative to SRS is evaluated on datasets from the UCI Machine Learning Repository. The results show that the proposed algorithm handles noisy, skewed data significantly better.
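
To make the contrast concrete, the sketch below compares the usual SRS fold split with a hypothetical histogram-guided split. The abstract does not specify the proposed fold-assignment procedure, so the `histogram_folds` function is only an illustrative assumption: it deals records sharing an attribute value across the folds in round-robin order so each fold roughly preserves that attribute's frequency histogram. It is not the paper's actual algorithm.

```python
import numpy as np

def srs_folds(n_samples, k=10, seed=0):
    """Simple random sampling: shuffle indices and split them into k folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

def histogram_folds(values, k=10, seed=0):
    """Illustrative (assumed) histogram-guided split: records sharing an
    attribute value are distributed round-robin across the k folds, so each
    fold approximates the overall frequency histogram of that attribute."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for value in np.unique(values):
        members = np.flatnonzero(values == value)
        rng.shuffle(members)
        for i, record in enumerate(members):
            folds[i % k].append(record)
    return [np.array(f) for f in folds]

# Usage on a toy skewed attribute: with SRS the rare value (1) may be absent
# from some folds, while the histogram-guided split spreads it evenly.
attr = np.array([0] * 95 + [1] * 5)
print([int(np.sum(attr[f] == 1)) for f in srs_folds(len(attr))])
print([int(np.sum(attr[f] == 1)) for f in histogram_folds(attr)])
```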