[1] Chandan K. Reddy,et al. LeafNATS: An Open-Source Toolkit and Live Demo System for Neural Abstractive Text Summarization , 2019, NAACL.
[2] Dragomir R. Radev,et al. Multi-News: A Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model , 2019, ACL.
[3] Jürgen Schmidhuber,et al. Long Short-Term Memory , 1997, Neural Computation.
[4] Lukasz Kaiser,et al. Sample Efficient Text Summarization Using a Single Pre-Trained Transformer , 2019, ArXiv.
[5] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[6] Omer Levy,et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension , 2019, ACL.
[7] Bowen Zhou,et al. SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents , 2016, AAAI.
[8] Jian Zhang,et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text , 2016, EMNLP.
[9] Franck Dernoncourt,et al. A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents , 2018, NAACL.
[10] Colin Raffel,et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer , 2019, J. Mach. Learn. Res..
[11] Penn Treebank, Linguistic Data Consortium, 1999.
[12] Bowen Zhou,et al. Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond , 2016, CoNLL.
[13] Richard Socher,et al. A Deep Reinforced Model for Abstractive Summarization , 2017, ICLR.
[14] Quoc V. Le,et al. Unsupervised Pretraining for Sequence to Sequence Learning , 2016, EMNLP.
[15] Mirella Lapata,et al. Don’t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization , 2018, EMNLP.
[16] Noam Shazeer,et al. Adafactor: Adaptive Learning Rates with Sublinear Memory Cost , 2018, ICML.
[17] Taku Kudo,et al. Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates , 2018, ACL.
[18] Yiming Yang,et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding , 2019, NeurIPS.
[19] Xuanjing Huang,et al. Searching for Effective Neural Extractive Summarization: What Works and What’s Next , 2019, ACL.
[20] Richard Socher,et al. Neural Text Summarization: A Critical Evaluation , 2019, EMNLP.
[21] Noah A. Smith,et al. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , 2016, ACL 2016.
[22] Shashi Narayan,et al. Leveraging Pre-trained Checkpoints for Sequence Generation Tasks , 2019, Transactions of the Association for Computational Linguistics.
[23] Xiaodong Liu,et al. Unified Language Model Pre-training for Natural Language Understanding and Generation , 2019, NeurIPS.
[24] Quoc V. Le,et al. Semi-supervised Sequence Learning , 2015, NIPS.
[25] Lu Wang,et al. BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization , 2019, ACL.
[26] Quoc V. Le,et al. Sequence to Sequence Learning with Neural Networks , 2014, NIPS.
[27] Radu Soricut,et al. Multi-stage Pretraining for Abstractive Summarization , 2019, ArXiv.
[28] Yiming Yang,et al. The Enron Corpus: A New Dataset for Email Classification Research , 2004.
[29] Yoshua Bengio,et al. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling , 2014, ArXiv.
[30] Chin-Yew Lin,et al. ROUGE: A Package for Automatic Evaluation of Summaries , 2004, ACL 2004.
[31] Phil Blunsom,et al. Teaching Machines to Read and Comprehend , 2015, NIPS.
[32] Gunhee Kim,et al. Abstractive Summarization of Reddit Posts with Multi-level Memory Networks , 2018, NAACL.
[33] Jason Weston,et al. Neural Text Generation with Unlikelihood Training , 2019, ICLR.
[34] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[35] Lukasz Kaiser,et al. Generating Wikipedia by Summarizing Long Sequences , 2018, ICLR.
[36] Anastassia Kornilova,et al. BillSum: A Corpus for Automatic Summarization of US Legislation , 2019, EMNLP.
[37] Alec Radford,et al. Improving Language Understanding by Generative Pre-Training , 2018 .
[38] Ilya Sutskever,et al. Language Models are Unsupervised Multitask Learners , 2019 .
[39] Mor Naaman,et al. Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies , 2018, NAACL.
[40] Christopher D. Manning,et al. Get To The Point: Summarization with Pointer-Generator Networks , 2017, ACL.
[41] Joel R. Tetreault,et al. This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation , 2019, ACL.
[42] Maria Leonor Pacheco,et al. Proceedings of the Association for Computational Linguistics , 2001.
[43] Omer Levy,et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding , 2018, BlackboxNLP@EMNLP.
[44] Benno Stein,et al. TL;DR: Mining Reddit to Learn Automatic Summarization , 2017, NFiS@EMNLP.
[45] Jason Weston,et al. A Neural Attention Model for Abstractive Sentence Summarization , 2015, EMNLP.
[46] Sandeep Subramanian,et al. On Extractive and Abstractive Neural Document Summarization with Transformer Language Models , 2020, EMNLP.
[47] 知秀 柴田. Understand in 5 Minutes!? A Quick Read of Famous Papers: Jacob Devlin et al. : BERT : Pre-training of Deep Bidirectional Transformers for Language Understanding , 2020.
[48] Xu Tan,et al. MASS: Masked Sequence to Sequence Pre-training for Language Generation , 2019, ICML.
[49] William Yang Wang,et al. WikiHow: A Large Scale Text Summarization Dataset , 2018, ArXiv.
[50] Omer Levy,et al. SpanBERT: Improving Pre-training by Representing and Predicting Spans , 2019, TACL.
[51] George Kurian,et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation , 2016, ArXiv.
[52] Rico Sennrich,et al. Neural Machine Translation of Rare Words with Subword Units , 2015, ACL.