Kayo Yin | Patrick Fernandes | André F. T. Martins | Graham Neubig
[1] Jörg Tiedemann, et al. Neural Machine Translation with Extended Context, 2017, DiscoMT@EMNLP.
[2] Rico Sennrich, et al. Has Machine Translation Achieved Human Parity? A Case for Document-level Evaluation, 2018, EMNLP.
[3] V. Braun, et al. Using thematic analysis in psychology, 2006.
[4] Kenneth Ward Church, et al. Word Association Norms, Mutual Information, and Lexicography, 1989, ACL.
[5] Sharid Loáiciga, et al. A Pronoun Test Suite Evaluation of the English–German MT Systems at WMT 2018, 2018, WMT.
[6] James Henderson, et al. Document-Level Neural Machine Translation with Hierarchical Attention Networks, 2018, EMNLP.
[7] Rico Sennrich, et al. Context-Aware Monolingual Repair for Neural Machine Translation, 2019, EMNLP.
[8] Lysandre Debut, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, arXiv.
[9] Jun Suzuki, et al. JParaCrawl: A Large Scale Web-Based English-Japanese Parallel Corpus, 2020, LREC.
[10] Hermann Ney, et al. When and Why is Document-level Context Useful in Neural Machine Translation?, 2019, EMNLP.
[11] Alon Lavie, et al. COMET: A Neural Framework for MT Evaluation, 2020, EMNLP.
[12] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[13] Salim Roukos, et al. Bleu: a Method for Automatic Evaluation of Machine Translation, 2002, ACL.
[14] Rico Sennrich, et al. When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion, 2019, ACL.
[15] Rachel Bawden, et al. Document-level Neural MT: A Systematic Comparison, 2020, EAMT.
[16] Graham Neubig, et al. Measuring and Increasing Context Usage in Context-Aware Machine Translation, 2021, ACL.
[17] Myle Ott, et al. fairseq: A Fast, Extensible Toolkit for Sequence Modeling, 2019, NAACL.
[18] Mikel L. Forcada, et al. ParaCrawl: Web-scale parallel corpora for the languages of the EU, 2019, MTSummit.
[19] Preslav Nakov, et al. Evaluating Pronominal Anaphora in Machine Translation: An Evaluation Measure and a Test Suite, 2019, EMNLP.
[20] Rico Sennrich, et al. Context-Aware Neural Machine Translation Learns Anaphora Resolution, 2018, ACL.
[21] Rico Sennrich, et al. Evaluating Discourse Phenomena in Neural Machine Translation, 2017, NAACL.
[22] Rico Sennrich, et al. A Large-Scale Test Set for the Evaluation of Context-Aware Pronoun Translation in Neural Machine Translation, 2018, WMT.
[23] Ann Bies, et al. Bracketing Guidelines For Treebank II Style Penn Treebank Project, 1995.
[24] Graham Neubig, et al. When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?, 2018, NAACL.
[25] Christopher D. Manning, et al. Stanza: A Python Natural Language Processing Toolkit for Many Human Languages, 2020, ACL.
[26] Marcello Federico, et al. Modelling pronominal anaphora in statistical machine translation, 2010, IWSLT.
[27] Gholamreza Haffari, et al. A Survey on Document-level Neural Machine Translation, 2021, ACM Comput. Surv.
[28] Gholamreza Haffari, et al. Document Context Neural Machine Translation with Memory Networks, 2017, ACL.
[29] Andy Way, et al. Attaining the Unattainable? Reassessing Claims of Human Parity in Neural Machine Translation, 2018, WMT.
[30] Marine Carpuat, et al. One Translation Per Discourse, 2009, SEW@NAACL-HLT.
[31] Shafiq R. Joty, et al. Can Your Context-Aware MT System Pass the DiP Benchmark Tests?: Evaluation Benchmarks for Discourse Phenomena in Machine Translation, 2020, arXiv.
[32] André F. T. Martins, et al. Do Context-Aware Translation Models Pay the Right Attention?, 2021, ACL.
[33] Graham Neubig, et al. compare-mt: A Tool for Holistic Comparison of Language Generation Systems, 2019, NAACL.
[34] Beatrice Santorini, et al. Building a Large Annotated Corpus of English: The Penn Treebank, 1993, CL.
[35] Graham Neubig, et al. Word Alignment by Fine-tuning Embeddings on Parallel Corpora, 2021, EACL.
[36] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.