Pre-training for Abstractive Document Summarization by Reinstating Source Text
Ming Zhou | Furu Wei | Wei Lu | Xingxing Zhang | Yanyan Zou