David Berthelot, Colin Raffel, Ekin D. Cubuk, Han Zhang, Nicholas Carlini, Zizhao Zhang, Kihyuk Sohn, Chun-Liang Li, Alex Kurakin
[1] Quoc V. Le, et al. Self-Training With Noisy Student Improves ImageNet Classification, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Alexander Kolesnikov, et al. S4L: Self-Supervised Semi-Supervised Learning, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[3] Jinghui Chen, et al. Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks, 2018, IJCAI.
[4] Eugene Charniak, et al. Effective Self-Training for Parsing, 2006, NAACL.
[5] Nathan Srebro, et al. The Marginal Value of Adaptive Gradient Methods in Machine Learning, 2017, NIPS.
[6] Frank Hutter, et al. Decoupled Weight Decay Regularization, 2017, ICLR.
[7] Kibok Lee, et al. A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks, 2018, NeurIPS.
[8] Geoffrey E. Hinton, et al. On the importance of initialization and momentum in deep learning, 2013, ICML.
[9] Quoc V. Le, et al. Randaugment: Practical automated data augmentation with a reduced search space, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[10] Quoc V. Le, et al. Semi-supervised Learning by Coaching, 2019.
[11] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[12] David Berthelot, et al. MixMatch: A Holistic Approach to Semi-Supervised Learning, 2019, NeurIPS.
[13] Philip Bachman, et al. Learning with Pseudo-Ensembles, 2014, NIPS.
[14] Frank Hutter, et al. SGDR: Stochastic Gradient Descent with Warm Restarts, 2016, ICLR.
[15] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[16] Alexander Zien, et al. Semi-Supervised Learning, 2006.
[17] G. McLachlan. Iterative Reclassification Procedure for Constructing an Asymptotically Optimal Rule of Allocation in Discriminant Analysis, 1975.
[18] Harri Valpola, et al. Weight-averaged consistency targets improve semi-supervised deep learning results, 2017, ArXiv.
[19] Yoshua Bengio, et al. Semi-supervised Learning by Entropy Minimization, 2004, CAP.
[20] Xiaojin Zhu. Semi-Supervised Learning Literature Survey, 2006.
[21] Y. Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2), 1983.
[22] Yang Yang, et al. Deep Learning Scaling is Predictable, Empirically, 2017, ArXiv.
[23] Quoc V. Le, et al. AutoAugment: Learning Augmentation Strategies From Data, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[24] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[25] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[26] Yang Zou, et al. Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training, 2018, ArXiv.
[27] Kaiming He, et al. Exploring the Limits of Weakly Supervised Pretraining, 2018, ECCV.
[28] Rico Sennrich, et al. Improving Neural Machine Translation Models with Monolingual Data, 2015, ACL.
[29] Alexandre Lacoste, et al. TADAM: Task dependent adaptive metric for improved few-shot learning, 2018, NeurIPS.
[30] R. Srikant, et al. Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks, 2017, ICLR.
[31] Boris Polyak. Some methods of speeding up the convergence of iteration methods, 1964.
[32] Tolga Tasdizen, et al. Mutual exclusivity loss for semi-supervised deep learning, 2016, 2016 IEEE International Conference on Image Processing (ICIP).
[33] Quoc V. Le, et al. Unsupervised Data Augmentation for Consistency Training, 2019, NeurIPS.
[34] Guodong Zhang, et al. Three Mechanisms of Weight Decay Regularization, 2018, ICLR.
[35] Martial Hebert, et al. Semi-Supervised Self-Training of Object Detection Models, 2005, 2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION'05) - Volume 1.
[36] Colin Raffel, et al. Realistic Evaluation of Deep Semi-Supervised Learning Algorithms, 2018, NeurIPS.
[37] Fei-Fei Li, et al. ImageNet: A large-scale hierarchical image database, 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition.
[38] Xiaojin Zhu, et al. Introduction to Semi-Supervised Learning, 2009, Synthesis Lectures on Artificial Intelligence and Machine Learning.
[39] Dong-Hyun Lee. Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks, 2013.
[40] Úlfar Erlingsson, et al. Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications, 2019, ArXiv.
[41] Kevin Gimpel, et al. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks, 2016, ICLR.
[42] Balaji Lakshminarayanan, et al. AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty, 2020, ICLR.
[43] Jaehoon Lee, et al. On Empirical Comparisons of Optimizers for Deep Learning, 2019, ArXiv.
[44] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[45] Noel E. O'Connor, et al. Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning, 2019, 2020 International Joint Conference on Neural Networks (IJCNN).
[46] Tolga Tasdizen, et al. Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning, 2016, NIPS.
[47] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[48] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[49] Bernhard Schölkopf, et al. Introduction to Semi-Supervised Learning, 2006, Semi-Supervised Learning.
[50] Yoshua Bengio, et al. Interpolation Consistency Training for Semi-Supervised Learning, 2019, IJCAI.
[51] Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images, 2009.
[52] Graham W. Taylor, et al. Improved Regularization of Convolutional Neural Networks with Cutout, 2017, ArXiv.
[53] Honglak Lee, et al. An Analysis of Single-Layer Networks in Unsupervised Feature Learning, 2011, AISTATS.
[54] Geoffrey French, et al. Self-ensembling for visual domain adaptation, 2017, ICLR.
[55] Fan Yang, et al. Good Semi-supervised Learning That Requires a Bad GAN, 2017, NIPS.
[56] Shin Ishii, et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[57] Tapani Raiko, et al. Semi-supervised Learning with Ladder Networks, 2015, NIPS.
[58] Quoc V. Le, et al. SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition, 2019, INTERSPEECH.
[59] David Berthelot, et al. ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring, 2020, ICLR.
[60] Yonghui Wu, et al. Exploring the Limits of Language Modeling, 2016, ArXiv.
[61] Kaiming He, et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, 2017, ArXiv.
[62] H. J. Scudder. Probability of error of some adaptive pattern-recognition machines, 1965, IEEE Trans. Inf. Theory.
[63] Timo Aila, et al. Temporal Ensembling for Semi-Supervised Learning, 2016, ICLR.