Cristian Sminchisescu, Thomas Mensink, Thalaiyasingam Ajanthan, Richard Hartley, Kartik Gupta, Amir Rahimi
[1] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[2] Bernhard Schölkopf, et al. A Kernel Two-Sample Test, 2012, J. Mach. Learn. Res.
[3] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2017, CVPR.
[4] Tengyu Ma, et al. Verified Uncertainty Calibration, 2019, NeurIPS.
[5] Jacob Roll, et al. Evaluating model calibration in classification, 2019, AISTATS.
[6] Philip H. S. Torr, et al. Calibrating Deep Neural Networks using Focal Loss, 2020, NeurIPS.
[7] Seong Joon Oh, et al. CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features, 2019, ICCV.
[8] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[9] Peter A. Flach, et al. Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers, 2017, AISTATS.
[10] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[11] Sunita Sarawagi, et al. Trainable Calibration Measures For Neural Networks From Kernel Mean Embeddings, 2018, ICML.
[12] Geoffrey E. Hinton, et al. When Does Label Smoothing Help?, 2019, NeurIPS.
[13] A. N. Kolmogorov. Sulla determinazione empirica di una legge di distribuzione (On the empirical determination of a distribution law), 1933.
[14] Bianca Zadrozny, et al. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers, 2001, ICML.
[15] John Platt, et al. Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods, 1999.
[16] Rich Caruana, et al. Predicting good probabilities with supervised learning, 2005, ICML.
[17] Milos Hauskrecht, et al. Obtaining Well Calibrated Probabilities Using Bayesian Binning, 2015, AAAI.
[18] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[19] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[20] Gopinath Chennupati, et al. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks, 2019, NeurIPS.
[21] Peter A. Flach, et al. Beyond temperature scaling: Obtaining well-calibrated multiclass probabilities with Dirichlet calibration, 2019, NeurIPS.
[22] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[23] Jeremy Nixon, et al. Measuring Calibration in Deep Learning, 2019, CVPR Workshops.
[24] G. Brier. Verification of Forecasts Expressed in Terms of Probability, 1950.
[25] 김정민, et al. Simplification of Face Images Using Cubic Spline Interpolation, 2010.
[26] Bohyung Han, et al. Learning for Single-Shot Confidence Calibration in Deep Neural Networks Through Stochastic Inferences, 2019, CVPR.
[27] Bianca Zadrozny, et al. Transforming classifier scores into accurate multiclass probability estimates, 2002, KDD.
[28] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[29] Fredrik Lindsten, et al. Calibration tests in multi-class classification: A unifying framework, 2019, NeurIPS.
[30] B. Kvasov. Cubic Spline Interpolation, 2000.
[31] Kilian Q. Weinberger, et al. On Calibration of Modern Neural Networks, 2017, ICML.
[32] Geoffrey E. Hinton, et al. Regularizing Neural Networks by Penalizing Confident Output Distributions, 2017, ICLR.