Unsupervised Data Augmentation for Consistency Training
Qizhe Xie | Zihang Dai | Eduard Hovy | Minh-Thang Luong | Quoc V. Le
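The paper's core idea is a consistency-training objective: a standard supervised loss on labeled data, plus a KL-divergence term that pushes the model's prediction on a strongly augmented copy of an unlabeled example toward its fixed (stop-gradient) prediction on the original. Below is a minimal sketch of that objective in PyTorch, under stated assumptions: the names `model`, `augment`, and `lambda_u` are illustrative, and the paper's training-signal annealing and confidence masking are omitted.

```python
import torch
import torch.nn.functional as F

def uda_loss(model, x_labeled, y_labeled, x_unlabeled, augment, lambda_u=1.0):
    """Supervised cross-entropy plus a KL consistency term between
    predictions on unlabeled examples and their augmented copies."""
    # Standard supervised term on the (small) labeled batch.
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)

    # Predictions on the original unlabeled batch serve as fixed targets
    # (stop-gradient), so only the augmented branch is trained to match.
    with torch.no_grad():
        p_orig = F.softmax(model(x_unlabeled), dim=-1)

    # KL(p_orig || p_aug): kl_div takes log-probs as input, probs as target.
    log_p_aug = F.log_softmax(model(augment(x_unlabeled)), dim=-1)
    consistency = F.kl_div(log_p_aug, p_orig, reduction="batchmean")

    return sup_loss + lambda_u * consistency
```

In the paper, `augment` is an advanced, task-specific transformation (e.g., RandAugment [63] for images, back-translation [80] for text) rather than simple noise, and `lambda_u` weights the unsupervised term relative to the supervised one.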
[1] Max Welling, et al. Semi-Supervised Classification with Graph Convolutional Networks, 2016, ICLR.
[2] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[3] Philip Bachman, et al. Learning with Pseudo-Ensembles, 2014, NIPS.
[4] Jason Weston, et al. Deep learning via semi-supervised embedding, 2008, ICML '08.
[5] Shin Ishii, et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[6] Graham W. Taylor, et al. Improved Regularization of Convolutional Neural Networks with Cutout, 2017, ArXiv.
[7] Andrew Gordon Wilson, et al. There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average, 2018, ICLR.
[8] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[9] Andrew M. Dai, et al. Adversarial Training Methods for Semi-Supervised Text Classification, 2016, ICLR.
[10] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[11] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[12] Fan Yang, et al. Good Semi-supervised Learning That Requires a Bad GAN, 2017, NIPS.
[13] Quoc V. Le, et al. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension, 2018, ICLR.
[14] Quoc V. Le, et al. AutoAugment: Learning Augmentation Policies from Data, 2018, ArXiv.
[15] Tie-Yan Liu, et al. Dual Learning for Machine Translation, 2016, NIPS.
[16] Il-Chul Moon, et al. Adversarial Dropout for Supervised and Semi-supervised Learning, 2017, AAAI.
[17] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[18] Julian Salazar. Invariant representation learning for robust deep networks, 2018.
[19] Po-Sen Huang, et al. Are Labels Required for Improving Adversarial Robustness?, 2019, NeurIPS.
[20] Max Welling, et al. Stochastic Beams and Where to Find Them: The Gumbel-Top-k Trick for Sampling Sequences Without Replacement, 2019, ICML.
[21] Xavier Gastaldi, et al. Shake-Shake regularization, 2017, ArXiv.
[22] Quoc V. Le, et al. Semi-supervised Sequence Learning, 2015, NIPS.
[23] Harri Valpola, et al. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results, 2017, ArXiv.
[24] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[25] Hideki Nakayama, et al. Unifying semi-supervised and robust learning by mixup, 2019.
[26] Tolga Tasdizen, et al. Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning, 2016, NIPS.
[27] Sebastian Ruder, et al. Universal Language Model Fine-tuning for Text Classification, 2018, ACL.
[28] Yoshua Bengio, et al. Interpolation Consistency Training for Semi-Supervised Learning, 2019, IJCAI.
[29] Davis Liang, et al. Learning Noise-Invariant Representations for Robust Speech Recognition, 2018, IEEE Spoken Language Technology Workshop (SLT).
[30] Ruslan Salakhutdinov, et al. Semi-Supervised QA with Generative Domain-Adaptive Nets, 2017, ACL.
[31] Christopher Potts, et al. Learning Word Vectors for Sentiment Analysis, 2011, ACL.
[32] Max Welling, et al. Semi-supervised Learning with Deep Generative Models, 2014, NIPS.
[33] Rico Sennrich, et al. Improving Neural Machine Translation Models with Monolingual Data, 2015, ACL.
[34] Bo Zhang, et al. Smooth Neighbors on Teacher Graphs for Semi-Supervised Learning, 2018, CVPR.
[35] Quoc V. Le, et al. SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition, 2019, INTERSPEECH.
[36] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[37] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[38] Colin Raffel, et al. Realistic Evaluation of Deep Semi-Supervised Learning Algorithms, 2018, NeurIPS.
[39] O. Chapelle, et al. (Eds.). Semi-Supervised Learning, 2006, MIT Press.
[40] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.
[41] Quoc V. Le, et al. Selfie: Self-supervised Pretraining for Image Embedding, 2019, ArXiv.
[42] Graham Neubig, et al. SwitchOut: an Efficient Data Augmentation Algorithm for Neural Machine Translation, 2018, EMNLP.
[43] Xiaojin Zhu, et al. Semi-Supervised Learning, 2010, Encyclopedia of Machine Learning.
[44] Takuya Akiba, et al. ShakeDrop Regularization for Deep Residual Learning, 2018, IEEE Access.
[45] Jacob Jackson, et al. Semi-Supervised Learning by Label Gradient Alignment, 2019, ArXiv.
[46] Yoshua Bengio, et al. Semi-supervised Learning by Entropy Minimization, 2004, CAP.
[47] Ludwig Schmidt, et al. Unlabeled Data Improves Adversarial Robustness, 2019, NeurIPS.
[48] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[49] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[50] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[51] Anton van den Hengel, et al. Image-Based Recommendations on Styles and Substitutes, 2015, SIGIR.
[52] Masashi Sugiyama, et al. Learning Discrete Representations via Information Maximizing Self-Augmented Training, 2017, ICML.
[53] Tong Zhang, et al. Deep Pyramid Convolutional Neural Networks for Text Categorization, 2017, ACL.
[54] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[55] Gholamreza Haffari, et al. Sequence to Sequence Mixture Model for Diverse Machine Translation, 2018, CoNLL.
[56] Timo Aila, et al. Temporal Ensembling for Semi-Supervised Learning, 2016, ICLR.
[57] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[58] Zoubin Ghahramani, et al. Combining active learning and semi-supervised learning using Gaussian fields and harmonic functions, 2003, ICML.
[59] Erich Elsen, et al. Deep Speech: Scaling up end-to-end speech recognition, 2014, ArXiv.
[60] Ruslan Salakhutdinov, et al. Revisiting Semi-Supervised Learning with Graph Embeddings, 2016, ICML.
[61] Alexander Kolesnikov, et al. S4L: Self-Supervised Semi-Supervised Learning, 2019, ICCV.
[62] Tomohide Shibata. Understand It in 5 Minutes!? Skimming Famous Papers: Jacob Devlin et al.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2020.
[63] Quoc V. Le, et al. RandAugment: Practical data augmentation with no separate search, 2019, ArXiv.
[64] Shumeet Baluja, et al. Advances in Neural Information Processing, 1994.
[65] Amir Najafi, et al. Robustness to Adversarial Perturbations in Learning from Incomplete Data, 2019, NeurIPS.
[66] Quoc V. Le, et al. Semi-Supervised Sequence Modeling with Cross-View Training, 2018, EMNLP.
[67] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[68] Xiang Zhang, et al. Character-level Convolutional Networks for Text Classification, 2015, NIPS.
[69] Dong-Hyun Lee, et al. Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks, 2013.
[70] Ole Winther, et al. Auxiliary Deep Generative Models, 2016, ICML.
[71] Ruslan Salakhutdinov, et al. Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function, 2019, AAAI.
[72] Shih-Fu Chang, et al. Unsupervised Embedding Learning via Invariant and Spreading Instance Feature, 2019, CVPR.
[73] Peter König, et al. Data augmentation instead of explicit regularization, 2018, ArXiv.
[74] Di He, et al. Adversarially Robust Generalization Just Requires More Unlabeled Data, 2019, ArXiv.
[75] Ali Razavi, et al. Data-Efficient Image Recognition with Contrastive Predictive Coding, 2019, ICML.
[76] Jason Weston, et al. A unified architecture for natural language processing: deep neural networks with multitask learning, 2008, ICML '08.
[77] François Chollet, et al. Xception: Deep Learning with Depthwise Separable Convolutions, 2017, CVPR.
[78] Marc'Aurelio Ranzato, et al. Mixture Models for Diverse Machine Translation: Tricks of the Trade, 2019, ICML.
[79] David Berthelot, et al. MixMatch: A Holistic Approach to Semi-Supervised Learning, 2019, NeurIPS.
[80] Myle Ott, et al. Understanding Back-Translation at Scale, 2018, EMNLP.
[81] Tapani Raiko, et al. Semi-supervised Learning with Ladder Networks, 2015, NIPS.
[82] Yann LeCun, et al. Transformation Invariance in Pattern Recognition - Tangent Distance and Tangent Propagation, 1996, Neural Networks: Tricks of the Trade.