Calibrating Classifier Scores into Probabilities

This paper provides an overview of calibration methods for supervised classification learners. Calibration maps raw classifier scores onto the probability scale, so that a score can be read as an estimate of the probability of class membership. Such probabilistic output is especially useful when the classification result is post-processed or fed into further decision making. The calibrators are compared by 10-fold cross-validation of their performance on SVM and CART outputs for four two-class data sets.
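
To illustrate the kind of calibration studied here, the following minimal sketch wraps an SVM and a CART learner with a sigmoid (Platt) calibrator fitted by cross-validation, using scikit-learn's CalibratedClassifierCV. The synthetic two-class data, the choice of LinearSVC and DecisionTreeClassifier, and the sigmoid method are illustrative assumptions, not the paper's actual experimental setup.

```python
# A minimal calibration sketch, assuming scikit-learn is available.
# The data set and learners below are stand-ins, not the paper's.
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split

# Synthetic two-class data as a placeholder for the four data sets.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Wrap each base learner with a calibrator; method="sigmoid" is Platt
# scaling, method="isotonic" would use isotonic regression instead.
for base in (LinearSVC(), DecisionTreeClassifier()):
    calibrated = CalibratedClassifierCV(base, method="sigmoid", cv=10)
    calibrated.fit(X_train, y_train)
    # Raw SVM decision values / CART leaf frequencies are mapped
    # into [0, 1] and can now be read as class probabilities.
    proba = calibrated.predict_proba(X_test)
    print(type(base).__name__, proba[:3])
```

Sigmoid calibration is a natural default for SVM decision values, while tree-based scores often benefit more from isotonic regression; the cv=10 setting mirrors the 10-fold cross-validation used in the comparison.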