Improving named entity correctness of abstractive summarization by generative negative sampling
[1] Qingkai Zeng, et al. Enhancing Factual Consistency of Abstractive Summarization, 2021, NAACL.
[2] Ramesh Nallapati, et al. Improving Factual Consistency of Abstractive Summarization via Question Answering, 2021, ACL.
[3] Artidoro Pagnoni, et al. Understanding Factuality in Abstractive Summarization with FRANK: A Benchmark for Factuality Metrics, 2021, NAACL.
[4] Ramesh Nallapati, et al. Entity-level Factual Consistency of Abstractive Text Summarization, 2021, EACL.
[5] Yejin Choi, et al. GO FIGURE: A Meta Evaluation of Factuality in Summarization, 2020, Findings of ACL.
[6] Colin Raffel, et al. mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer, 2020, NAACL.
[7] Jackie Chi Kit Cheung, et al. Factual Error Correction for Abstractive Summarization Models, 2020, EMNLP.
[8] Jackie Chi Kit Cheung, et al. Multi-Fact Correction in Abstractive Text Summarization, 2020, EMNLP.
[9] Wanxiang Che, et al. N-LTP: An Open-source Neural Language Technology Platform for Chinese, 2020, EMNLP.
[10] Ryan McDonald, et al. On Faithfulness and Factuality in Abstractive Summarization, 2020, ACL.
[11] Alex Wang, et al. Asking and Answering Questions to Evaluate the Factual Consistency of Summaries, 2020, ACL.
[12] Christopher D. Manning, et al. Optimizing the Factual Correctness of a Summary: A Study of Summarizing Radiology Reports, 2019, ACL.
[13] Omer Levy, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, 2019, ACL.
[14] Richard Socher, et al. Evaluating the Factual Consistency of Abstractive Text Summarization, 2019, EMNLP.
[15] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[16] Lysandre Debut, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, arXiv.
[17] Mirella Lapata, et al. Text Summarization with Pretrained Encoders, 2019, EMNLP.
[18] Ben Goodrich, et al. Assessing The Factual Accuracy of Generated Text, 2019, KDD.
[19] Ido Dagan, et al. Ranking Generated Summaries by Correctness: An Interesting but Challenging Application for Natural Language Inference, 2019, ACL.
[20] Haoran Li, et al. Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization, 2018, COLING.
[21] Richard Socher, et al. Improving Abstraction in Text Summarization, 2018, EMNLP.
[22] Ramakanth Pasunuru, et al. Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation, 2018, ACL.
[23] Furu Wei, et al. Faithful to the Original: Fact Aware Neural Abstractive Summarization, 2017, AAAI.
[24] Xiaojun Wan, et al. Overview of the NLPCC 2017 Shared Task: Single Document Summarization, 2017, NLPCC.
[25] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[26] Bowen Zhou, et al. Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond, 2016, CoNLL.
[27] Quoc V. Le, et al. Sequence to Sequence Learning with Neural Networks, 2014, NIPS.
[28] Jamshid Bagherzadeh, et al. An Evaluation of Two-Step Techniques for Positive-Unlabeled Learning in Text Classification, 2014.
[29] Charles Elkan, et al. Learning classifiers from only positive and unlabeled data, 2008, KDD.
[30] Chin-Yew Lin, et al. ROUGE: A Package for Automatic Evaluation of Summaries, 2004, ACL.
[31] Salim Roukos, et al. Bleu: a Method for Automatic Evaluation of Machine Translation, 2002, ACL.
[32] I. Alsmadi, et al. Deep reinforcement and transfer learning for abstractive text summarization: A review, 2022, Comput. Speech Lang.
[33] C. Pal, et al. On Extractive and Abstractive Neural Document Summarization with Transformer Language Models, 2020, EMNLP.
[34] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[35] J. Fleiss. Measuring nominal scale agreement among many raters, 1971, Psychological Bulletin.