[1] Xu Tan, et al. MASS: Masked Sequence to Sequence Pre-training for Language Generation, 2019, ICML.
[2] Jiajun Zhang, et al. Exploiting Source-side Monolingual Data in Neural Machine Translation, 2016, EMNLP.
[3] Salim Roukos, et al. Bleu: a Method for Automatic Evaluation of Machine Translation, 2002, ACL.
[4] Quoc V. Le, et al. Unsupervised Data Augmentation, 2019, ArXiv.
[5] Dong-Hyun Lee, et al. Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks, 2013.
[6] Rico Sennrich, et al. Improving Neural Machine Translation Models with Monolingual Data, 2015, ACL.
[7] Alexander Zien, et al. Semi-Supervised Classification by Low Density Separation, 2005, AISTATS.
[8] Myle Ott, et al. Understanding Back-Translation at Scale, 2018, EMNLP.
[9] Shin Ishii, et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[10] H. J. Scudder, et al. Probability of error of some adaptive pattern-recognition machines, 1965, IEEE Trans. Inf. Theory.
[11] Tapani Raiko, et al. Semi-supervised Learning with Ladder Networks, 2015, NIPS.
[12] Timo Aila, et al. Temporal Ensembling for Semi-Supervised Learning, 2016, ICLR.
[13] Avrim Blum, et al. Combining Labeled and Unlabeled Data with Co-Training, 1998, COLT.
[14] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[15] Andrew M. Dai, et al. Adversarial Training Methods for Semi-Supervised Text Classification, 2016, ICLR.
[16] David Yarowsky, et al. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods, 1995, ACL.
[17] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[18] Eugene Charniak, et al. Effective Self-Training for Parsing, 2006, NAACL.
[19] Zhi-Hua Zhou, et al. Tri-training: exploiting unlabeled data using three classifiers, 2005, IEEE Transactions on Knowledge and Data Engineering.
[20] Guillaume Lample, et al. Phrase-Based & Neural Unsupervised Machine Translation, 2018, EMNLP.
[21] Quoc V. Le, et al. Semi-Supervised Sequence Modeling with Cross-View Training, 2018, EMNLP.
[22] Myle Ott, et al. fairseq: A Fast, Extensible Toolkit for Sequence Modeling, 2019, NAACL.
[23] Nicola Ueffing, et al. Using monolingual source-language data to improve MT performance, 2006, IWSLT.
[24] Philipp Koehn, et al. Two New Evaluation Datasets for Low-Resource Machine Translation: Nepali-English and Sinhala-English, 2019, ArXiv.
[25] Jason Weston, et al. A Neural Attention Model for Abstractive Sentence Summarization, 2015, EMNLP.
[26] Ari Rappoport, et al. Self-Training for Enhancement and Domain Adaptation of Statistical Parsers Trained on Small Datasets, 2007, ACL.
[27] Xiaojin Zhu, et al. Introduction to Semi-Supervised Learning, 2009, Synthesis Lectures on Artificial Intelligence and Machine Learning.
[28] Max Welling, et al. Semi-supervised Learning with Deep Generative Models, 2014, NIPS.
[29] Yan Zhou, et al. Democratic co-learning, 2004, 16th IEEE International Conference on Tools with Artificial Intelligence.
[30] Nitish Srivastava, et al. Improving neural networks by preventing co-adaptation of feature detectors, 2012, ArXiv.
[31] Mary P. Harper, et al. Self-Training PCFG Grammars with Latent Annotations Across Languages, 2009, EMNLP.
[32] Rico Sennrich, et al. Neural Machine Translation of Rare Words with Subword Units, 2015, ACL.
[33] O. Chapelle, et al. Semi-Supervised Learning, 2006, MIT Press.
[34] Chin-Yew Lin, et al. ROUGE: A Package for Automatic Evaluation of Summaries, 2004, ACL.
[35] Graham Neubig, et al. StructVAE: Tree-structured Latent Variable Models for Semi-supervised Semantic Parsing, 2018, ACL.
[36] Yoshua Bengio, et al. Semi-supervised Learning by Entropy Minimization, 2004, CAP.
[37] Phil Blunsom, et al. Language as a Latent Variable: Discrete Generative Models for Sentence Compression, 2016, EMNLP.