Huda Khayrallah | Philipp Koehn | Shuoyang Ding | Weiting Tan
[1] Alon Lavie, et al. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments, 2005, IEEvaluation@ACL.
[2] Kilian Q. Weinberger, et al. BERTScore: Evaluating Text Generation with BERT, 2019, ICLR.
[3] Myle Ott, et al. fairseq: A Fast, Extensible Toolkit for Sequence Modeling, 2019, NAACL.
[4] Philipp Koehn, et al. Findings of the 2014 Workshop on Statistical Machine Translation, 2014, WMT@ACL.
[5] Dejing Dou, et al. On Adversarial Examples for Character-Level Neural Machine Translation, 2018, COLING.
[6] Philipp Koehn, et al. Findings of the 2017 Conference on Machine Translation (WMT17), 2017, WMT.
[7] Raphael Shu, et al. Reward Optimization for Neural Machine Translation with Learned Metrics, 2021, arXiv.
[8] Masaaki Nagata, et al. NTT's Machine Translation Systems for WMT19 Robustness Task, 2019, WMT.
[9] Salim Roukos, et al. BLEU: a Method for Automatic Evaluation of Machine Translation, 2002, ACL.
[10] Andrew M. Dai, et al. Adversarial Training Methods for Semi-Supervised Text Classification, 2016, ICLR.
[11] Renjie Zheng, et al. Robust Machine Translation with Domain Sensitive Pseudo-Sources: Baidu-OSU WMT19 MT Robustness Shared Task System Report, 2019, WMT.
[12] Yonatan Belinkov, et al. Synthetic and Natural Noise Both Break Neural Machine Translation, 2017, ICLR.
[13] Matt Post, et al. A Call for Clarity in Reporting BLEU Scores, 2018, WMT.
[14] Yang Liu, et al. Minimum Risk Training for Neural Machine Translation, 2015, ACL.
[15] Sho Takase, et al. Rethinking Perturbations in Encoder-Decoders for Fast Training, 2021, NAACL.
[16] Graham Neubig, et al. MTNT: A Testbed for Machine Translation of Noisy Text, 2018, EMNLP.
[17] Yaser Al-Onaizan, et al. Evaluating Robustness to Input Perturbations for Neural Machine Translation, 2020, ACL.
[18] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[19] Yonatan Belinkov, et al. Findings of the First Shared Task on Machine Translation Robustness, 2019, WMT.
[20] Pan He, et al. Adversarial Examples: Attacks and Defenses for Deep Learning, 2017, IEEE Transactions on Neural Networks and Learning Systems.
[21] Tie-Yan Liu, et al. Dual Learning for Machine Translation, 2016, NIPS.
[22] Xinyu Dai, et al. A Reinforced Generation of Adversarial Samples for Neural Machine Translation, 2019, arXiv.
[23] Cristian Grozea. System Description: The Submission of FOKUS to the WMT 19 Robustness Task, 2019, WMT.
[24] Alon Lavie, et al. COMET: A Neural Framework for MT Evaluation, 2020, EMNLP.
[25] Jacob Eisenstein, et al. AdvAug: Robust Adversarial Augmentation for Neural Machine Translation, 2020, ACL.
[26] Graham Neubig, et al. On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models, 2019, NAACL.
[27] Yong Cheng, et al. Robust Neural Machine Translation with Doubly Adversarial Inputs, 2019, ACL.
[28] Jun Suzuki, et al. Effective Adversarial Regularization for Neural Machine Translation, 2019, ACL.
[29] Omer Levy, et al. Training on Synthetic Noise Improves Robustness to Natural Noise in Machine Translation, 2019, EMNLP.
[30] Daniel Rueckert, et al. Realistic Adversarial Data Augmentation for MR Image Segmentation, 2020, MICCAI.