Distribution-free calibration guarantees for histogram binning without sample splitting

We prove calibration guarantees for the popular histogram binning (also called uniform-mass binning) method of Zadrozny and Elkan (2001). Histogram binning has displayed strong practical performance, but theoretical guarantees have so far been shown only for sample-split variants that avoid ‘double dipping’ the data. We demonstrate on a credit default dataset that the statistical cost of sample splitting is practically significant. We then prove calibration guarantees for the original method, which double dips the data, using a certain Markov property of order statistics. Based on our results, we make practical recommendations for choosing the number of bins in histogram binning. In our illustrative simulations, we propose a new tool for assessing calibration, the validity plot, which provides more information than a single estimate of the expected calibration error (ECE).
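To make the method concrete, the following is a minimal sketch of uniform-mass histogram binning in the ‘double dipping’ form the abstract refers to: the same data both sets the bin edges (as empirical quantiles of the scores) and estimates the per-bin means. The NumPy implementation, the function names, and the default value for empty bins are our own illustrative assumptions, not code from the paper; a sample-splitting variant would instead fit the edges on one half of the data and the bin means on the other.

import numpy as np

def fit_histogram_binning(scores, labels, n_bins):
    # Uniform-mass binning: bin edges are empirical quantiles of the scores,
    # so each bin holds roughly the same number of calibration points.
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    edges = np.quantile(scores, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover the whole real line
    bin_ids = np.clip(np.searchsorted(edges, scores, side="right") - 1,
                      0, n_bins - 1)
    # Calibrated prediction for each bin: the empirical frequency of label 1.
    # The 0.5 default for an empty bin is an arbitrary illustrative choice.
    bin_means = np.array([labels[bin_ids == b].mean() if np.any(bin_ids == b) else 0.5
                          for b in range(n_bins)])
    return edges, bin_means

def predict_histogram_binning(edges, bin_means, scores):
    # Map each new score to the calibrated probability of the bin it falls in.
    bin_ids = np.searchsorted(edges, np.asarray(scores, dtype=float), side="right") - 1
    return bin_means[np.clip(bin_ids, 0, len(bin_means) - 1)]

def ece_estimate(edges, bin_means, scores, labels):
    # Plug-in ECE on held-out data: the bin-frequency-weighted average gap
    # between the predicted and the observed probability in each bin.
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    bin_ids = np.clip(np.searchsorted(edges, scores, side="right") - 1,
                      0, len(bin_means) - 1)
    ece = 0.0
    for b in range(len(bin_means)):
        mask = bin_ids == b
        if mask.any():
            ece += mask.mean() * abs(bin_means[b] - labels[mask].mean())
    return ece

On our reading of the abstract, a validity plot would report, for every tolerance level, how much probability mass falls in bins whose gap exceeds that tolerance, rather than collapsing the per-bin gaps into the single weighted-average summary computed by ece_estimate above; the exact definition is given in the paper.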

[1] Ananya Kumar, Percy Liang, and Tengyu Ma. Verified Uncertainty Calibration. NeurIPS, 2019.

[2] Ran Dai, Hyebin Song, Rina Foygel Barber, and Garvesh Raskutti. The bias of isotonic regression. Electronic Journal of Statistics, 2019.

[3] David Widmann, Fredrik Lindsten, and Dave Zachariah. Calibration tests in multi-class classification: A unifying framework. NeurIPS, 2019.

[4] Rebecca Roelofs, Nicholas Cain, Jonathon Shlens, and Michael C. Mozer. Mitigating bias in calibration error estimation. arXiv, 2020.

[5] Bianca Zadrozny and Charles Elkan. Transforming classifier scores into accurate multiclass probability estimates. KDD, 2002.

[6] K. R. Parthasarathy and P. K. Bhattacharya. Some limit theorems in regression theory. Sankhyā: The Indian Journal of Statistics, Series A, 1961.

[7] Bianca Zadrozny and Charles Elkan. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. ICML, 2001.

[8] I-Cheng Yeh and Che-hui Lien. The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients. Expert Systems with Applications, 2009.

[9] Gábor Lugosi and Andrew Nobel. Consistency of Data-driven Histogram Methods for Density Estimation and Classification. The Annals of Statistics, 1996.

[10] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On Calibration of Modern Neural Networks. ICML, 2017.

[11] Tilmann Gneiting, Fadoua Balabdaoui, and Adrian E. Raftery. Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society: Series B, 2007.

[12] A. Philip Dawid. The Well-Calibrated Bayesian. Journal of the American Statistical Association, 1982.

[13] Jochen Bröcker. Estimating reliability and resolution of probability forecasts through decomposition of the empirical score. Climate Dynamics, 2012.

[14] C. J. Clopper and E. S. Pearson. The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika, 1934.

[15] Meelis Kull, Telmo Silva Filho, and Peter A. Flach. Beyond sigmoids: How to obtain well-calibrated probabilities from binary classifiers with beta calibration. Electronic Journal of Statistics, 2017.

[16] Balaji Lakshminarayanan, Alexandra Pritzel, and Charles Blundell. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. NIPS, 2017.

[17] Robert G. Miller. Statistical prediction by discriminant analysis. 1962.

[18] Frederick Sanders. On Subjective Probability Forecasting. Journal of Applied Meteorology, 1963.

[19] Barry C. Arnold, N. Balakrishnan, and H. N. Nagaraja. A First Course in Order Statistics. 1992.

[20] Leo Breiman. Random Forests. Machine Learning, 2001.

[21] Foster Provost and Pedro Domingos. Tree Induction for Probability-Based Ranking. Machine Learning, 2003.

[22] Ursula Hébert-Johnson, Michael P. Kim, Omer Reingold, and Guy N. Rothblum. Multicalibration: Calibration for the (Computationally-Identifiable) Masses. ICML, 2018.

[23] Alexandru Niculescu-Mizil and Rich Caruana. Predicting good probabilities with supervised learning. ICML, 2005.

[24] Chirag Gupta, Aleksandr Podkopaev, and Aaditya Ramdas. Distribution-free binary classification: prediction sets, confidence intervals and calibration. NeurIPS, 2020.

[25] Mahdi Pakdaman Naeini, Gregory F. Cooper, and Milos Hauskrecht. Obtaining Well Calibrated Probabilities Using Bayesian Binning. AAAI, 2015.

[26] Mohammad Ahsanullah, Valery B. Nevzorov, and Mohammad Shakil. An Introduction to Order Statistics. 2013.