Multi-Fact Correction in Abstractive Text Summarization
Jackie Chi Kit Cheung | Yu Cheng | Zhe Gan | Jingjing Liu | Shuohang Wang | Yue Dong