The Integrated Calibration Index (ICI) and related metrics for quantifying the calibration of logistic regression models

Assessing the calibration of methods for estimating the probability of the occurrence of a binary outcome is an important aspect of validating the performance of risk‐prediction algorithms. Calibration commonly refers to the agreement between predicted and observed probabilities of the outcome. Graphical methods are an attractive approach to assessing calibration, in which observed and predicted probabilities are compared using loess‐based smoothing functions. We describe the Integrated Calibration Index (ICI), which is motivated by Harrell's Emax index, the maximum absolute difference between a smooth calibration curve and the diagonal line of perfect calibration. The ICI can be interpreted as a weighted difference between observed and predicted probabilities, in which observations are weighted by the empirical density function of the predicted probabilities. As such, the ICI is a measure of calibration that explicitly incorporates the distribution of predicted probabilities. We also discuss two related measures of calibration, E50 and E90, which represent the median and 90th percentile of the absolute difference between observed and predicted probabilities. We illustrate the utility of the ICI, E50, and E90 by using them to compare the calibration of logistic regression with that of random forests and boosted regression trees for predicting mortality in patients hospitalized with a heart attack. The use of these numeric metrics permitted greater differentiation in calibration than was possible by visual inspection of graphical calibration curves.
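The metrics described above can be sketched in a few lines of code. The following is a minimal illustration, not the paper's implementation: the loess smoother is approximated here with a simple Nadaraya-Watson kernel smoother, the bandwidth is an arbitrary choice, and the data are simulated rather than drawn from the heart-attack cohort. The key idea it demonstrates is that averaging the absolute curve-versus-diagonal differences over the observations implicitly weights them by the empirical density of the predicted probabilities, which is what distinguishes the ICI from Emax.

```python
import numpy as np

def smooth_calibration(p, y, bandwidth=0.05):
    """Smoothed observed event rate at each predicted probability.

    A Gaussian kernel smoother stands in for the loess smoother used
    in the paper (an assumption for this sketch).
    """
    w = np.exp(-0.5 * ((p[:, None] - p[None, :]) / bandwidth) ** 2)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

def calibration_metrics(p, y):
    """ICI, E50, E90, and Emax from predicted probabilities p and outcomes y."""
    c = smooth_calibration(p, y)
    d = np.abs(c - p)  # absolute difference: calibration curve vs. diagonal
    return {
        "ICI": d.mean(),            # mean over observations -> density-weighted
        "E50": np.median(d),        # median absolute difference
        "E90": np.quantile(d, 0.9), # 90th percentile
        "Emax": d.max(),            # Harrell's Emax
    }

# Simulated, perfectly calibrated predictions: outcomes drawn with
# probability equal to the prediction, so all metrics should be small.
rng = np.random.default_rng(0)
p = rng.uniform(0.01, 0.99, 2000)
y = rng.binomial(1, p).astype(float)
m = calibration_metrics(p, y)
print({k: round(v, 3) for k, v in m.items()})
```

By construction E50 ≤ E90 ≤ Emax, while the ICI (a density-weighted mean) can fall anywhere below Emax depending on how the predicted probabilities are distributed.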
