[1] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[2] Jason Yosinski, et al. Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask, 2019, NeurIPS.
[3] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[4] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[5] Kilian Q. Weinberger, et al. On Calibration of Modern Neural Networks, 2017, ICML.
[6] Jeremy Nixon, et al. Measuring Calibration in Deep Learning, 2019, CVPR Workshops.
[7] Rahul Mehta. Sparse Transfer Learning via Winning Lottery Tickets, 2019, ArXiv.
[8] Yarin Gal. Uncertainty in Deep Learning, 2016, PhD thesis, University of Cambridge.
[9] Noel E. O'Connor, et al. Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning, 2019, IJCNN.
[10] Yuandong Tian, et al. One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers, 2019, NeurIPS.
[11] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[12] Michael Carbin, et al. The Lottery Ticket Hypothesis: Training Pruned Neural Networks, 2018, ArXiv.
[13] Carl E. Rasmussen, et al. Evaluating Predictive Uncertainty Challenge, 2005, MLCW.
[14] Luke Zettlemoyer, et al. Sparse Networks from Scratch: Faster Training without Losing Performance, 2019, ArXiv.
[15] David Berthelot, et al. MixMatch: A Holistic Approach to Semi-Supervised Learning, 2019, NeurIPS.
[16] David Berthelot, et al. ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring, 2019, ArXiv.
[17] Erich Elsen, et al. Rigging the Lottery: Making All Tickets Winners, 2020, ICML.
[18] Dmitry P. Vetrov, et al. Variational Dropout Sparsifies Deep Neural Networks, 2017, ICML.
[19] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[20] Suyog Gupta, et al. To prune, or not to prune: exploring the efficacy of pruning for model compression, 2017, ICLR.
[21] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, CVPR.
[22] Adam Gaier, et al. Weight Agnostic Neural Networks, 2019, NeurIPS.
[23] Thomas G. Dietterich, et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, 2018, ICLR.
[24] Guodong Zhang, et al. Picking Winning Tickets Before Training by Preserving Gradient Flow, 2020, ICLR.
[25] Timo Aila, et al. Pruning Convolutional Neural Networks for Resource Efficient Inference, 2016, ICLR.
[26] Shrey Desai, et al. Evaluating Lottery Tickets Under Distributional Shifts, 2019, EMNLP.
[27] Bohyung Han, et al. Learning for Single-Shot Confidence Calibration in Deep Neural Networks Through Stochastic Inferences, 2018, CVPR.
[28] Ali Farhadi, et al. What's Hidden in a Randomly Weighted Neural Network?, 2020, CVPR.
[29] Gopinath Chennupati, et al. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks, 2019, NeurIPS.
[30] Michael Carbin, et al. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks, 2018, ICLR.
[31] A. Raftery, et al. Strictly Proper Scoring Rules, Prediction, and Estimation, 2007, Journal of the American Statistical Association.
[32] A. Krizhevsky. Convolutional Deep Belief Networks on CIFAR-10, 2010.
[33] Eunho Yang, et al. Adaptive Network Sparsification with Dependent Variational Beta-Bernoulli Dropout, 2018, arXiv:1805.10896.
[34] Stephen E. Fienberg, et al. The Comparison and Evaluation of Forecasters, 1983, The Statistician.