Eric P. Xing | Zhiting Hu | Maruan Al-Shedivat | Zichao Yang | Bowen Tan