Sebastian Nowozin | Stephan Mandt | Jasper Snoek | Tim Salimans | Joshua V. Dillon | Bastiaan S. Veeling | Rodolphe Jenatton | Jakub Swiatkowski | Kevin Roth | Linh Tran
[1] Charles Blundell, et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, 2016, NIPS.
[2] Yoshua Bengio, et al. FitNets: Hints for Thin Deep Nets, 2014, ICLR.
[3] Daniel Hernández-Lobato, et al. Deep Gaussian Processes for Regression using Approximate Expectation Propagation, 2016, ICML.
[4] Geoffrey E. Hinton, et al. Large scale distributed neural network training through online distillation, 2018, ICLR.
[5] Thomas G. Dietterich. Multiple Classifier Systems, 2000, Lecture Notes in Computer Science.
[6] Carl E. Rasmussen, et al. Evaluating Predictive Uncertainty Challenge, 2005, MLCW.
[7] A. Asuncion, et al. UCI Machine Learning Repository, University of California, Irvine, School of Information and Computer Sciences, 2007.
[8] Christoph H. Lampert, et al. Towards Understanding Knowledge Distillation, 2019, ICML.
[9] Brian Kingsbury, et al. Very deep multilingual convolutional neural networks for LVCSR, 2015, 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[10] Huchuan Lu, et al. Deep Mutual Learning, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[11] A. Raftery, et al. Strictly Proper Scoring Rules, Prediction, and Estimation, 2007, Journal of the American Statistical Association.
[12] Guocong Song, et al. Collaborative Learning for Deep Neural Networks, 2018, NeurIPS.
[13] Vivek Rathod, et al. Bayesian Dark Knowledge, 2015, NIPS.
[14] Benjamin Van Roy, et al. Deep Exploration via Bootstrapped DQN, 2016, NIPS.
[15] Xu Lan, et al. Knowledge Distillation by On-the-Fly Native Ensemble, 2018, NeurIPS.
[16] Finale Doshi-Velez, et al. Decomposition of Uncertainty for Active Learning and Reliable Reinforcement Learning in Stochastic Systems, 2017, ArXiv.
[17] Bernhard Schölkopf, et al. Unifying distillation and privileged information, 2015, ICLR.
[18] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[19] Rich Caruana, et al. Do Deep Nets Really Need to be Deep?, 2013, NIPS.
[20] Sebastian Nowozin, et al. Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift, 2019, NeurIPS.
[21] Andrey Malinin, et al. Ensemble Distribution Distillation, 2019, ICLR.
[22] Mark J. F. Gales, et al. Predictive Uncertainty Estimation via Prior Networks, 2018, NeurIPS.
[23] Dumitru Erhan, et al. Going deeper with convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).