Jiwei Li, Yuxian Meng, Fei Wu, Xiaofei Sun, Chun Fan, Zijun Sun