Evaluating model calibration in classification
Juozas Vaicenavicius | David Widmann | Carl R. Andersson | Fredrik Lindsten | Jacob Roll | Thomas B. Schön
[1] Jochen Bröcker. Some Remarks on the Reliability of Categorical Probability Forecasts, 2008.
[2] Sunita Sarawagi et al. Trainable Calibration Measures For Neural Networks From Kernel Mean Embeddings, 2018, ICML.
[3] Kilian Q. Weinberger et al. Densely Connected Convolutional Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Yoshua Bengio et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[5] Leonard A. Smith et al. Increasing the Reliability of Reliability Diagrams, 2007.
[6] Jochen Bröcker. Reliability, Sufficiency, and the Decomposition of Proper Scores, 2008, arXiv:0806.0813.
[7] Alex Krizhevsky et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[8] A. H. Murphy et al. "Good" Probability Assessors, 1968.
[9] Mark Steyvers et al. Choosing a Strictly Proper Scoring Rule, 2013, Decis. Anal.
[10] A. Nobel. Histogram regression estimation using data-dependent partitions, 1996.
[11] A. H. Murphy et al. A General Framework for Forecast Verification, 1987.
[12] Rich Caruana et al. Predicting good probabilities with supervised learning, 2005, ICML.
[13] Alex Kendall et al. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, 2017, NIPS.
[14] Charles Blundell et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, 2016, NIPS.