[1] Joelle Pineau, et al. An Actor-Critic Algorithm for Sequence Prediction, 2016, ICLR.
[2] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[3] Stanislau Semeniuta, et al. On Accurate Evaluation of GANs for Language Generation, 2018, arXiv.
[4] Christopher D. Manning, et al. A Structural Probe for Finding Syntax in Word Representations, 2019, NAACL.
[5] Richard Socher, et al. Neural Text Summarization: A Critical Evaluation, 2019, EMNLP.
[6] Lantao Yu, et al. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient, 2016, AAAI.
[7] Yann Dauphin, et al. Hierarchical Neural Story Generation, 2018, ACL.
[8] Andrew M. Dai, et al. MaskGAN: Better Text Generation via Filling in the ______, 2018, ICLR.
[9] Kilian Q. Weinberger, et al. BERTScore: Evaluating Text Generation with BERT, 2019, ICLR.
[10] Chi-kiu Lo, et al. Fully Unsupervised Crosslingual Semantic Textual Similarity Metric Based on BERT for Identifying Parallel Data, 2019, CoNLL.
[11] Sanja Fidler, et al. Skip-Thought Vectors, 2015, NIPS.
[12] Ying Qin, et al. Truly Exploring Multiple References for Machine Translation Evaluation, 2015, EAMT.
[13] Eric P. Xing, et al. Connecting the Dots Between MLE and RL for Sequence Generation, 2018, DeepRLStructPred@ICLR.
[14] Ronald J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, 1992, Machine Learning.
[15] Marc'Aurelio Ranzato, et al. Sequence Level Training with Recurrent Neural Networks, 2015, ICLR.
[16] Philipp Koehn, et al. Re-evaluating the Role of Bleu in Machine Translation Research, 2006, EACL.
[17] Omer Levy, et al. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems, 2019, NeurIPS.
[18] Joelle Pineau, et al. Language GANs Falling Short, 2018, ICLR.
[19] Florian Schmidt. Generalization in Generation: A closer look at Exposure Bias, 2019, NGT@EMNLP-IJCNLP.
[20] Martin Wattenberg, et al. Visualizing and Measuring the Geometry of BERT, 2019, NeurIPS.
[21] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[22] Sanja Fidler, et al. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books, 2015, ICCV.
[23] Kawin Ethayarajh. How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings, 2019, EMNLP.
[24] Omer Levy, et al. What Does BERT Look at? An Analysis of BERT’s Attention, 2019, BlackboxNLP@ACL.
[25] Salim Roukos, et al. Bleu: a Method for Automatic Evaluation of Machine Translation, 2002, ACL.
[26] Samy Bengio, et al. Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks, 2015, NIPS.
[27] Fei Liu, et al. MoverScore: Text Generation Evaluating with Contextualized Embeddings and Earth Mover Distance, 2019, EMNLP.
[28] Idan Szpektor, et al. DiscoFuse: A Large-Scale Dataset for Discourse-Based Sentence Fusion, 2019, NAACL.
[29] Vassilina Nikoulina, et al. On the use of BERT for Neural Machine Translation, 2019, EMNLP.
[30] Alex Wang, et al. Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling, 2018, arXiv.
[31] Luke S. Zettlemoyer, et al. Dissecting Contextual Word Embeddings: Architecture and Representation, 2018, EMNLP.
[32] Graham Neubig, et al. Beyond BLEU: Training Neural Machine Translation with Semantic Similarity, 2019, ACL.
[33] Vaibhava Goel, et al. Self-Critical Sequence Training for Image Captioning, 2017, CVPR.
[34] Chin-Yew Lin. ROUGE: A Package for Automatic Evaluation of Summaries, 2004, ACL.
[35] Joelle Pineau, et al. Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models, 2015, AAAI.
[36] Alexander M. Rush, et al. Latent Normalizing Flows for Discrete Sequences, 2019, ICML.
[37] Noah A. Smith, et al. Sentence Mover’s Similarity: Automatic Evaluation for Multi-Sentence Texts, 2019, ACL.
[38] Sanjeev Arora, et al. Linear Algebraic Structure of Word Senses, with Applications to Polysemy, 2016, TACL.
[39] Yoshua Bengio, et al. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation, 2014, EMNLP.
[40] Ronald J. Williams, et al. A Learning Algorithm for Continually Running Fully Recurrent Neural Networks, 1989, Neural Computation.
[41] Matt J. Kusner, et al. From Word Embeddings To Document Distances, 2015, ICML.
[42] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[43] Yejin Choi, et al. The Curious Case of Neural Text Degeneration, 2019, ICLR.
[44] Timothy Baldwin, et al. Putting Evaluation in Context: Contextual Embeddings Improve Machine Translation Evaluation, 2019, ACL.
[45] Yejin Choi, et al. Learning to Write with Cooperative Discriminators, 2018, ACL.
[46] Yoshua Bengio, et al. Professor Forcing: A New Algorithm for Training Recurrent Networks, 2016, NIPS.
[47] Lijun Wu, et al. A Study of Reinforcement Learning for Neural Machine Translation, 2018, EMNLP.
[48] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[49] Sergei Vassilvitskii, et al. k-means++: the advantages of careful seeding, 2007, SODA.
[50] Michael Elhadad. Book Review: Natural Language Processing with Python by Steven Bird, Ewan Klein, and Edward Loper, 2010, CL.
[51] Alex Wang, et al. BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model, 2019, Workshop on Methods for Optimizing and Evaluating Neural Language Generation.
[52] Anton Osokin, et al. SEARNN: Training RNNs with Global-Local Losses, 2017, ICLR.
[53] Rémi Louf, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, arXiv.
[54] George Kurian, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016, arXiv.
[55] Omri Abend, et al. On the Weaknesses of Reinforcement Learning for Neural Machine Translation, 2019, ICLR.