Jiwei Li | Yuxian Meng | Fei Wu | Xiaofei Sun | Zijun Sun | Chun Fan