Universal Simultaneous Machine Translation with Mixture-of-Experts Wait-k Policy

Simultaneous machine translation (SiMT) begins generating the translation before reading the entire source sentence and therefore has to trade off between translation quality and latency. To meet the different quality and latency requirements of practical applications, previous methods usually train multiple SiMT models, one per latency level, which incurs large computational costs. In this paper, we propose a universal SiMT model with a Mixture-of-Experts Wait-k Policy that achieves the best translation quality under arbitrary latency with only one trained model. Specifically, our method employs multi-head attention to realize the mixture of experts, where each head is treated as a wait-k expert with its own number of waiting source words; given a test latency and the source inputs, the expert weights are adjusted accordingly to produce the best translation. Experiments on three datasets show that our method outperforms all strong baselines under different latency levels, including the state-of-the-art adaptive policy.
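
The sketch below illustrates the idea described in the abstract, not the authors' actual implementation: each attention head in a cross-attention layer is constrained by its own wait-k mask (the per-head lags, layer sizes, and the gating network that conditions on the requested test latency are illustrative assumptions).

```python
# Hedged sketch of a Mixture-of-Experts Wait-k cross-attention layer.
# Assumptions (not from the paper): the per-head lags in expert_lags, the
# mean-pooled source summary fed to the gate, and all hyper-parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEWaitKAttention(nn.Module):
    def __init__(self, d_model=512, num_heads=8,
                 expert_lags=(1, 3, 5, 7, 9, 11, 13, 15)):
        super().__init__()
        assert num_heads == len(expert_lags)
        self.h, self.d_k = num_heads, d_model // num_heads
        self.expert_lags = expert_lags                      # waiting words per expert head
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        # gate: source summary + requested latency -> one weight per expert head
        self.gate = nn.Linear(d_model + 1, num_heads)

    def forward(self, tgt, src, test_k):
        B, T, _ = tgt.shape
        S = src.size(1)
        q = self.q_proj(tgt).view(B, T, self.h, self.d_k).transpose(1, 2)
        k = self.k_proj(src).view(B, S, self.h, self.d_k).transpose(1, 2)
        v = self.v_proj(src).view(B, S, self.h, self.d_k).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) / self.d_k ** 0.5  # (B, h, T, S)

        # each head only attends to the source prefix allowed by its own wait-k lag
        src_pos = torch.arange(S, device=src.device)
        tgt_pos = torch.arange(T, device=src.device)
        for i, lag in enumerate(self.expert_lags):
            visible = src_pos[None, :] < (tgt_pos[:, None] + lag)   # (T, S)
            scores[:, i] = scores[:, i].masked_fill(~visible, float('-inf'))

        attn = F.softmax(scores, dim=-1)
        heads = attn @ v                                     # (B, h, T, d_k)

        # expert weights from the source context and the latency requested at test time
        src_summary = src.mean(dim=1)                        # (B, d_model)
        lat = torch.full((B, 1), float(test_k), device=src.device)
        w = F.softmax(self.gate(torch.cat([src_summary, lat], dim=-1)), dim=-1)  # (B, h)

        # reweight each expert head before the output projection
        mixed = (heads * w[:, :, None, None]).transpose(1, 2).reshape(B, T, -1)
        return self.out_proj(mixed)
```

At inference, the same trained model can serve any latency level by simply changing `test_k`, which is what allows one model to replace a family of fixed wait-k models.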
