In this paper, we present our submission to the English to Czech Text Translation Task of IWSLT 2019. Our work studies how pre-trained language models, used as input embeddings, can improve a specialized machine translation system trained on little data. To this end, we implemented a Transformer-based encoder-decoder neural system that can use the output of a pre-trained language model as input embeddings, and we compared its performance under three configurations: 1) without any pre-trained language model (constrained), 2) using a language model trained on the monolingual parts of the allowed English-Czech data (constrained), and 3) using a language model trained on a large quantity of external monolingual data (unconstrained). We used BERT as the external pre-trained language model (configuration 3) and the BERT architecture to train our own language model (configuration 2). Regarding the training data, we trained our MT system on two small parallel datasets: one consists only of the provided MuST-C corpus, and the other combines the MuST-C corpus with the News Commentary corpus from WMT. We observed that using the external pre-trained BERT improves the scores of our system by +0.8 to +1.5 BLEU on our development set and by +0.97 to +1.94 BLEU on the test set. However, using our own language model trained only on the allowed parallel data seems to improve translation performance only when the system is trained on the smaller dataset.
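A minimal sketch of the approach described above, not the authors' released code: the contextual outputs of a pre-trained BERT replace the usual source-side token embeddings of a Transformer encoder-decoder. It assumes PyTorch (>= 1.9) and Hugging Face `transformers`; the class name, model dimension, vocabulary size, and the decision to freeze BERT are illustrative assumptions, since the abstract does not specify these details.

```python
# Sketch only: feed a frozen pre-trained BERT's contextual states to a
# Transformer encoder-decoder as source input embeddings.
import torch
import torch.nn as nn
from transformers import BertModel


class BertEmbeddingTranslator(nn.Module):
    def __init__(self, bert_name="bert-base-cased", d_model=512, tgt_vocab=32000):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        for p in self.bert.parameters():
            p.requires_grad = False                     # assumption: LM kept frozen
        # map BERT's hidden size (768) onto the translator's model dimension
        self.proj = nn.Linear(self.bert.config.hidden_size, d_model)
        self.tgt_embed = nn.Embedding(tgt_vocab, d_model)
        self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
        self.generator = nn.Linear(d_model, tgt_vocab)  # target-vocabulary logits

    def forward(self, src_ids, src_mask, tgt_ids):
        # Contextual representations replace the source token embedding lookup.
        with torch.no_grad():
            src = self.bert(input_ids=src_ids,
                            attention_mask=src_mask).last_hidden_state
        src = self.proj(src)
        # Target positional encoding is omitted here for brevity.
        tgt = self.tgt_embed(tgt_ids)
        causal = self.transformer.generate_square_subsequent_mask(
            tgt_ids.size(1)).to(src.device)
        out = self.transformer(src, tgt, tgt_mask=causal,
                               src_key_padding_mask=(src_mask == 0))
        return self.generator(out)                      # (batch, tgt_len, vocab)
```

Projecting BERT's hidden states down to the translator's model dimension keeps the trainable parameter count close to a baseline Transformer; whether the language model is frozen or fine-tuned in the submitted system is not stated in the abstract, so the freeze above is only one plausible choice.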