Subhabrata Mukherjee, Ming Wu, Ahmed Hassan Awadallah, Jing Gao, Yuancheng Tu, Yaqing Wang, Haoda Chu