Friendly Topic Assistant for Transformer Based Abstractive Summarization
Zhengjue Wang | Zhibin Duan | Hao Zhang | Chaojie Wang | Long Tian | Bo Chen | Mingyuan Zhou
[1] Shashi Narayan, et al. Leveraging Pre-trained Checkpoints for Sequence Generation Tasks, 2019, Transactions of the Association for Computational Linguistics.
[2] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[3] Mirella Lapata, et al. Text Summarization with Pretrained Encoders, 2019, EMNLP.
[4] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[5] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[6] Omer Levy, et al. What Does BERT Look at? An Analysis of BERT's Attention, 2019, BlackboxNLP@ACL.
[7] Yoshua Bengio, et al. Feature-wise transformations, 2018, Distill.
[8] Alexander M. Rush, et al. Bottom-Up Abstractive Summarization, 2018, EMNLP.
[9] Samuel R. Bowman, et al. Can Unconditional Language Models Recover Arbitrary Sentences?, 2019, NeurIPS.
[10] Kam-Fai Wong, et al. Extractive Summarization Using Supervised and Semi-Supervised Learning, 2008, COLING.
[11] Yejin Choi, et al. Deep Communicating Agents for Abstractive Summarization, 2018, NAACL.
[12] Mirella Lapata, et al. Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization, 2018, EMNLP.
[13] Melissa Ailem, et al. Topic Augmented Generator for Abstractive Summarization, 2019, ArXiv.
[14] Xiaodong Liu, et al. Unified Language Model Pre-training for Natural Language Understanding and Generation, 2019, NeurIPS.
[15] Christopher D. Manning, et al. Get To The Point: Summarization with Pointer-Generator Networks, 2017, ACL.
[16] Yves Scherrer, et al. Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation, 2020, EMNLP.
[17] Hao Zhang, et al. WHAI: Weibull Hybrid Autoencoding Inference for Deep Topic Modeling, 2018, ICLR.
[18] Yuta Kikuchi, et al. The New York Times Annotated Corpus as a Large-Scale Summarization Resource, 2015.
[19] Ming Zhou, et al. HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization, 2019, ACL.
[20] Xu Tan, et al. MASS: Masked Sequence to Sequence Pre-training for Language Generation, 2019, ICML.
[21] Sandeep Subramanian, et al. On Extractive and Abstractive Neural Document Summarization with Transformer Language Models, 2020, EMNLP.
[22] Omer Levy, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, 2019, ACL.
[23] Omer Levy, et al. Are Sixteen Heads Really Better than One?, 2019, NeurIPS.
[24] Yiming Yang, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding, 2019, NeurIPS.
[25] Yao Zhao, et al. PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization, 2020, ICML.
[26] Jason Yosinski, et al. Plug and Play Language Models: A Simple Approach to Controlled Text Generation, 2020, ICLR.
[27] Akihiro Tamura, et al. Dependency-Based Self-Attention for Transformer NMT, 2019, RANLP.
[28] Thomas Wolf, et al. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, 2019, ArXiv.
[29] David B. Dunson, et al. Beta-Negative Binomial Process and Poisson Factor Analysis, 2011, AISTATS.
[30] Chin-Yew Lin, et al. ROUGE: A Package for Automatic Evaluation of Summaries, 2004, ACL 2004 Workshop.
[31] Phil Blunsom, et al. Teaching Machines to Read and Comprehend, 2015, NIPS.
[32] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[33] Yang Liu, et al. Fine-tune BERT for Extractive Summarization, 2019, ArXiv.
[34] Nancy F. Chen, et al. Topic-Aware Pointer-Generator Networks for Summarizing Spoken Conversations, 2019, IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).
[35] Richard Socher, et al. A Deep Reinforced Model for Abstractive Summarization, 2017, ICLR.
[36] Hao Zhang, et al. Variational Hetero-Encoder Randomized Generative Adversarial Networks for Joint Image-Text Modeling, 2019, ArXiv.
[37] Michael I. Jordan, et al. Latent Dirichlet Allocation, 2001, J. Mach. Learn. Res.
[38] Andy Way, et al. Topic-Informed Neural Machine Translation, 2016, COLING.
[39] Kenneth Heafield, et al. Incorporating Source Syntax into Transformer-Based Neural Machine Translation, 2019, WMT.
[40] Bo Chen, et al. Learning Dynamic Hierarchical Topic Graph with Graph Convolutional Network for Document Classification, 2020, AISTATS.