OpenNMT: Neural Machine Translation Toolkit
Alexander M. Rush | Yoon Kim | Vincent Nguyen | Jean Senellart | Yuntian Deng | Guillaume Klein
[1] Marc'Aurelio Ranzato,et al. Large Scale Distributed Deep Networks , 2012, NIPS.
[2] Jürgen Schmidhuber,et al. Long Short-Term Memory , 1997, Neural Computation.
[3] Yoshua Bengio,et al. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling , 2014, ArXiv.
[4] Rico Sennrich,et al. Linguistic Input Features Improve Neural Machine Translation , 2016, WMT.
[5] Quoc V. Le,et al. Addressing the Rare Word Problem in Neural Machine Translation , 2014, ACL.
[6] Rico Sennrich,et al. Neural Machine Translation of Rare Words with Subword Units , 2015, ACL.
[7] Bo Wang,et al. SYSTRAN's Pure Neural Machine Translation Systems , 2016, ArXiv.
[8] Jan Niehues,et al. Effective Strategies in Zero-Shot Neural Machine Translation , 2017, IWSLT.
[9] Yoshua Bengio,et al. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation , 2014, EMNLP.
[10] Quoc V. Le,et al. Listen, Attend and Spell , 2015, ArXiv.
[11] Christopher D. Manning,et al. Effective Approaches to Attention-based Neural Machine Translation , 2015, EMNLP.
[12] Johan Bos,et al. Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations , 2017, ArXiv.
[13] Pavel Levin,et al. Toward a full-scale neural machine translation in production: the Booking.com use case , 2017, MTSUMMIT.
[14] Yoshua Bengio,et al. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention , 2015, ICML.
[15] Quoc V. Le,et al. Massive Exploration of Neural Machine Translation Architectures , 2017, EMNLP.
[16] George Kurian,et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation , 2016, ArXiv.
[17] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[18] Vladimir Eidelman,et al. cdec: A Decoder, Alignment, and Learning Framework for Finite-State and Context-Free Translation Models , 2010, ACL.
[19] Bowen Zhou,et al. Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond , 2016, CoNLL.
[20] Alexander M. Rush,et al. Structured Attention Networks , 2017, ICLR.
[21] Yoshua Bengio,et al. Neural Machine Translation by Jointly Learning to Align and Translate , 2014, ICLR.
[22] Claire Gardent,et al. The WebNLG Challenge: Generating Text from RDF Data , 2017, INLG.
[23] Alexander M. Rush,et al. Challenges in Data-to-Document Generation , 2017, EMNLP.
[24] Yann Dauphin,et al. Convolutional Sequence to Sequence Learning , 2017, ICML.
[25] Alexander M. Rush,et al. Abstractive Sentence Summarization with Attentive Recurrent Neural Networks , 2016, NAACL.
[26] Philipp Koehn,et al. Moses: Open Source Toolkit for Statistical Machine Translation , 2007, ACL.
[27] Matt Post,et al. Sockeye: A Toolkit for Neural Machine Translation , 2017, ArXiv.
[28] Hang Li,et al. Incorporating Copying Mechanism in Sequence-to-Sequence Learning , 2016, ACL.
[29] Navdeep Jaitly,et al. Pointer Networks , 2015, NIPS.
[30] Quoc V. Le,et al. Sequence to Sequence Learning with Neural Networks , 2014, NIPS.
[31] Alexander M. Rush,et al. Image-to-Markup Generation with Coarse-to-Fine Attention , 2016, ICML.
[32] Dapeng Li,et al. OSU Multimodal Machine Translation System Report , 2017, WMT.
[33] Alexander M. Rush,et al. Coarse-to-Fine Attention Models for Document Summarization , 2017, NFiS@EMNLP.
[34] Quoc V. Le,et al. A Neural Conversational Model , 2015, ArXiv.
[35] Ahmed Guessoum,et al. Arabic Machine Transliteration using an Attention-based Encoder-decoder Model , 2017, ACLING.
[36] Ramón Fernández Astudillo,et al. From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification , 2016, ICML.