SOURCE: SOURce-Conditional ELMo-style Model for Machine Translation Quality Estimation