Detecting Covariate Shift with Black Box Predictors
Clément Feutry | Pablo Piantanida | Florence Alberge | Pierre Duhamel