Juan Pino | Hongyu Gong | Yun Tang | Xian Li
[1] Yong Wang, et al. Go From the General to the Particular: Multi-Domain Translation with Domain Transformation Networks, 2019, AAAI.
[2] Georg Heigold, et al. Multilingual acoustic models using distributed deep neural networks, 2013, ICASSP.
[3] Pushpak Bhattacharyya, et al. Multilingual Unsupervised NMT using Shared Encoder and Language-Specific Decoders, 2019, ACL.
[4] Chenhui Chu, et al. A Survey of Multilingual Neural Machine Translation, 2019, ACM Comput. Surv.
[5] Victor O. K. Li, et al. Universal Neural Machine Translation for Extremely Low Resource Languages, 2018, NAACL.
[6] Graham Neubig, et al. Parameter Sharing Methods for Multilingual Self-Attentional Translation Models, 2018, WMT.
[7] Myle Ott, et al. fairseq: A Fast, Extensible Toolkit for Sequence Modeling, 2019, NAACL.
[8] Marjan Ghazvininejad, et al. Multilingual Denoising Pre-training for Neural Machine Translation, 2020, Transactions of the Association for Computational Linguistics.
[9] Yang Liu, et al. Multi-Domain Neural Machine Translation with Word-Level Domain Context Discrimination, 2018, EMNLP.
[10] Tiejun Zhao, et al. Gumbel-Attention for Multi-modal Machine Translation, 2021, arXiv.
[11] Martti Vainio, et al. Proceedings of the Annual Conference of the International Speech Communication Association, 2016, Interspeech 2016.
[12] Zhao Chen, et al. GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks, 2017, ICML.
[13] Salim Roukos, et al. Bleu: a Method for Automatic Evaluation of Machine Translation, 2002, ACL.
[14] Martin Wattenberg, et al. Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation, 2016, TACL.
[15] Sen Wu, et al. Understanding and Improving Information Transfer in Multi-Task Learning, 2020, ICLR.
[16] Marta R. Costa-jussà, et al. End-to-End Speech Translation with the Transformer, 2018, IberSPEECH.
[17] Elizabeth Salesky, et al. The Multilingual TEDx Corpus for Speech Recognition and Translation, 2021, Interspeech 2021.
[18] Xing Wang, et al. How Does Selective Mechanism Improve Self-Attention Networks?, 2020, ACL.
[19] Huda Khayrallah, et al. Overcoming Catastrophic Forgetting During Domain Adaptation of Neural Machine Translation, 2019, NAACL.
[20] Matteo Negri, et al. Adapting Transformer to End-to-End Spoken Language Translation, 2019, INTERSPEECH.
[21] Jonathan Le Roux, et al. Streaming Automatic Speech Recognition with the Transformer Model, 2020, ICASSP.
[22] Huda Khayrallah, et al. Regularized Training Objective for Continued Training for Domain Adaptation in Neural Machine Translation, 2018, NMT@ACL.
[23] Dmytro Okhonko, et al. fairseq S2T: Fast Speech-to-Text Modeling with fairseq, 2020, AACL.
[24] Ankur Bapna, et al. Simple, Scalable Adaptation for Neural Machine Translation, 2019, EMNLP.
[25] Dietrich Klakow, et al. Testing the correlation of word error rate and perplexity, 2002, Speech Commun.
[26] Orhan Firat, et al. GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding, 2020, ICLR.
[27] Yun Tang, et al. Multilingual Speech Translation with Efficient Finetuning of Pretrained Models, 2020.
[28] Juan Pino, et al. CoVoST 2: A Massively Multilingual Speech-to-Text Translation Corpus, 2020, arXiv.
[29] S. Levine, et al. Gradient Surgery for Multi-Task Learning, 2020, NeurIPS.
[30] Alfons Juan-Císcar, et al. Europarl-ST: A Multilingual Corpus for Speech Translation of Parliamentary Debates, 2020, ICASSP.
[31] Daniel Ulbricht, et al. Learning to Branch for Multi-Task Learning, 2020, ICML.
[32] Ben Poole, et al. Categorical Reparameterization with Gumbel-Softmax, 2016, ICLR.
[33] Danielle Saunders. Domain Adaptation and Multi-Domain Adaptation for Neural Machine Translation: A Survey, 2021, arXiv.
[34] Gabriel Synnaeve, et al. Massively Multilingual ASR: 50 Languages, 1 Model, 1 Billion Parameters, 2020, INTERSPEECH.
[35] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[36] Jitendra Malik, et al. Which Tasks Should Be Learned Together in Multi-task Learning?, 2019, ICML.
[37] Miguel Ballesteros, et al. Multilingual Neural Machine Translation with Task-Specific Attention, 2018, COLING.
[38] Dakwale, et al. Fine-Tuning for Neural Machine Translation with Limited Degradation across In- and Out-of-Domain Data, 2017, MTSUMMIT.
[39] J. Pino, et al. CoVoST 2 and Massively Multilingual Speech-to-Text Translation, 2020.
[40] Kevin Duh, et al. Multilingual End-to-End Speech Translation, 2019, ASRU.
[41] Geoffrey E. Hinton, et al. Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer, 2017, ICLR.
[42] François Yvon, et al. Revisiting Multi-Domain Machine Translation, 2021, TACL.
[43] Josep Maria Crego, et al. Domain Control for Neural Machine Translation, 2016, RANLP.
[44] Noam Shazeer, et al. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity, 2021, arXiv.