Marc'Aurelio Ranzato | Wojciech Zaremba | Sumit Chopra | Michael Auli
[1] D. Rumelhart. Learning internal representations by back-propagating errors , 1986 .
[2] Jeffrey L. Elman,et al. Finding Structure in Time , 1990, Cogn. Sci..
[3] Hermann Ney,et al. Improved backing-off for M-gram language modeling , 1995, 1995 International Conference on Acoustics, Speech, and Signal Processing.
[4] Jürgen Schmidhuber,et al. Long Short-Term Memory , 1997, Neural Computation.
[5] Ehud Reiter,et al. Book Reviews: Building Natural Language Generation Systems , 2000, CL.
[6] Salim Roukos,et al. Bleu: a Method for Automatic Evaluation of Machine Translation , 2002, ACL.
[7] Eduard H. Hovy,et al. Automatic Evaluation of Summaries Using N-gram Co-occurrence Statistics , 2003, NAACL.
[8] Ronald J. Williams,et al. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning , 2004, Machine Learning.
[9] Richard S. Sutton,et al. Reinforcement Learning: An Introduction , 1998, IEEE Trans. Neural Networks.
[10] Yoshua Bengio,et al. Hierarchical Probabilistic Neural Network Language Model , 2005, AISTATS.
[11] Ben Taskar,et al. An End-to-End Discriminative Approach to Machine Translation , 2006, ACL.
[12] Philipp Koehn,et al. Moses: Open Source Toolkit for Statistical Machine Translation , 2007, ACL.
[13] Li Fei-Fei,et al. ImageNet: A large-scale hierarchical image database , 2009, CVPR.
[14] Tamir Hazan,et al. Direct Loss Minimization for Structured Prediction , 2010, NIPS.
[15] Lukás Burget,et al. Recurrent neural network based language model , 2010, INTERSPEECH.
[16] Richard M. Schwartz,et al. Expected BLEU Training for Graphs: BBN System Description for WMT11 System Combination Task , 2011, WMT@EMNLP.
[17] Geoffrey J. Gordon,et al. A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning , 2010, AISTATS.
[18] Li Deng,et al. Maximum Expected BLEU Training of Phrase and Lexicon Translation Models , 2012, ACL.
[19] Marcello Federico,et al. Report on the 10th IWSLT evaluation campaign , 2013, IWSLT.
[20] Navdeep Jaitly,et al. Towards End-To-End Speech Recognition with Recurrent Neural Networks , 2014, ICML.
[21] Pietro Perona,et al. Microsoft COCO: Common Objects in Context , 2014, ECCV.
[22] Jianfeng Gao,et al. Decoder Integration and Expected BLEU Training for Recurrent Neural Network Language Models , 2014, ACL.
[23] Alex Graves,et al. Recurrent Models of Visual Attention , 2014, NIPS.
[24] Quoc V. Le,et al. Sequence to Sequence Learning with Neural Networks , 2014, NIPS.
[25] Yoshua Bengio,et al. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention , 2015, ICML.
[26] Jason Weston,et al. A Neural Attention Model for Abstractive Sentence Summarization , 2015, EMNLP.
[27] Koray Kavukcuoglu,et al. Multiple Object Recognition with Visual Attention , 2014, ICLR.
[28] Martial Hebert,et al. Improving Multi-Step Prediction of Learned Time Series Models , 2015, AAAI.
[29] Samy Bengio,et al. Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks , 2015, NIPS.
[30] Wojciech Zaremba,et al. Reinforcement Learning Neural Turing Machines , 2015, ArXiv.
[31] Yoshua Bengio,et al. Neural Machine Translation by Jointly Learning to Align and Translate , 2014, ICLR.
[32] Xinyun Chen,et al. Delving into Transferable Adversarial Examples and Black-box Attacks , 2016, ICLR.
[33] Omer Levy,et al. Simulating Action Dynamics with Neural Process Networks , 2018, ICLR.
[34] J. Langford,et al. Search-Based Structured Prediction as Classification , 2022 .