Calibrating Classifier Scores into Probabilities
This paper provides an overview of calibration methods for supervised classification learners. Calibration maps raw classifier scores into the probability space, and such probabilistic output is especially useful when the classification result feeds into further post-processing. The calibrators are compared via 10-fold cross-validation according to their performance on SVM and CART outputs for four different two-class data sets.
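The idea of mapping scores into probabilities can be illustrated with Platt scaling, one of the standard calibration methods the paper surveys: a logistic (sigmoid) model is fitted to held-out classifier scores. The sketch below is a minimal illustration, not the paper's exact experimental setup; the synthetic data, split sizes, and use of `LinearSVC` are assumptions for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Synthetic two-class data (stand-in for the paper's data sets).
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# An SVM produces unbounded decision scores, not probabilities.
svm = LinearSVC(max_iter=5000).fit(X_train, y_train)
scores = svm.decision_function(X_cal).reshape(-1, 1)

# Platt scaling: fit a sigmoid p = 1 / (1 + exp(A*s + B)) to the scores
# on a held-out calibration set; logistic regression on the 1-D score
# implements exactly this mapping.
calibrator = LogisticRegression().fit(scores, y_cal)
probs = calibrator.predict_proba(scores)[:, 1]

# Calibrated outputs now live in [0, 1] and can be used downstream.
print(probs.min() >= 0.0 and probs.max() <= 1.0)
```

In practice the calibrator should be fitted on scores from data the classifier did not train on (as in the paper's cross-validation setup), otherwise the sigmoid overfits the training scores.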