Intermediate Self-supervised Learning for Machine Translation Quality Estimation
[1] Nello Cristianini, et al. Estimating the Sentence-Level Quality of Machine Translation Systems, 2009, EAMT.
[2] Guillaume Lample, et al. Cross-lingual Language Model Pretraining, 2019, NeurIPS.
[3] Doug Downey, et al. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks, 2020, ACL.
[4] Kilian Q. Weinberger, et al. Revisiting Few-sample BERT Fine-tuning, 2020, ArXiv.
[5] André F. T. Martins, et al. Findings of the WMT 2019 Shared Tasks on Quality Estimation, 2019, WMT.
[6] Chris Quirk, et al. Training a Sentence-Level Machine Translation Confidence Measure, 2004, LREC.
[7] Omer Levy, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, 2019, ACL.
[8] Omer Levy, et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, 2018, BlackboxNLP@EMNLP.
[9] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[10] Frank Hutter, et al. Decoupled Weight Decay Regularization, 2017, ICLR.
[11] Karin M. Verspoor, et al. Findings of the 2016 Conference on Machine Translation, 2016, WMT.
[12] André F. T. Martins, et al. Unbabel's Participation in the WMT19 Translation Quality Estimation Shared Task, 2019, WMT.
[13] Marius Mosbach, et al. On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines, 2020, ArXiv.
[14] Jong-Hyeok Lee, et al. Predictor-Estimator using Multilevel Task Learning with Stack Propagation for Neural Quality Estimation, 2017, WMT.
[15] Rémi Louf, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, ArXiv.
[16] Samuel R. Bowman, et al. Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks, 2018, ArXiv.
[17] Hyun Kim, et al. QE BERT: Bilingual BERT Using Multi-task Learning for Neural Quality Estimation, 2019, WMT.
[18] Matthew G. Snover, et al. A Study of Translation Edit Rate with Targeted Human Annotation, 2006, AMTA.
[19] Raphael Rubino. NICT Kyoto Submission for the WMT'20 Quality Estimation Task: Intermediate Training for Domain and Task Adaptation, 2020, WMT@EMNLP.
[20] Marco Turchi, et al. ESCAPE: a Large-scale Synthetic Corpus for Automatic Post-Editing, 2018, LREC.
[21] Salim Roukos, et al. Bleu: a Method for Automatic Evaluation of Machine Translation, 2002, ACL.