XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages
Tahmid Hasan | Abhik Bhattacharjee | Md Saiful Islam | Kazi Samin Mubasshir | Yuan-Fang Li | Yong-Bin Kang | M. Sohel Rahman | Rifat Shahriyar
[1] Bowen Zhou,et al. Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond , 2016, CoNLL.
[2] Shashi Narayan,et al. Leveraging Pre-trained Checkpoints for Sequence Generation Tasks , 2019, Transactions of the Association for Computational Linguistics.
[3] Alex Wang,et al. Asking and Answering Questions to Evaluate the Factual Consistency of Summaries , 2020, ACL.
[4] Colin Raffel,et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer , 2019, J. Mach. Learn. Res..
[5] Min Sun,et al. A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss , 2018, ACL.
[6] Yao Zhao,et al. PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization , 2020, ICML.
[7] Yoshua Bengio,et al. Neural Machine Translation by Jointly Learning to Align and Translate , 2014, ICLR.
[8] Mirella Lapata,et al. Sentence Compression Beyond Word Deletion , 2008, COLING.
[9] Gerard de Melo,et al. Commonsense Knowledge in Machine Intelligence , 2018, SIGMOD Record.
[10] Mirella Lapata,et al. Don’t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization , 2018, EMNLP.
[11] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[12] Yann Dauphin,et al. Convolutional Sequence to Sequence Learning , 2017, ICML.
[13] Chin-Yew Lin,et al. ROUGE: A Package for Automatic Evaluation of Summaries , 2004, ACL Workshop on Text Summarization Branches Out.
[14] Jason Weston,et al. A Neural Attention Model for Abstractive Sentence Summarization , 2015, EMNLP.
[15] Mor Naaman,et al. Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies , 2018, NAACL.
[16] Salim Roukos,et al. Bleu: a Method for Automatic Evaluation of Machine Translation , 2002, ACL.
[17] Taku Kudo,et al. Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates , 2018, ACL.
[18] Lysandre Debut,et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing , 2019, ArXiv.
[19] Christopher D. Manning,et al. Get To The Point: Summarization with Pointer-Generator Networks , 2017, ACL.
[20] Ankur Bapna,et al. Massively Multilingual Neural Machine Translation in the Wild: Findings and Challenges , 2019, ArXiv.
[21] Colin Raffel,et al. mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer , 2021, NAACL.
[22] Udo Kruschwitz,et al. MultiLing 2015: Multilingual Summarization of Single and Multi-Documents, On-line Fora, and Call-center Conversations , 2015, SIGDIAL Conference.
[23] Veselin Stoyanov,et al. Unsupervised Cross-lingual Representation Learning at Scale , 2019, ACL.
[24] Yoshua Bengio,et al. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation , 2014, EMNLP.
[25] Yonatan Belinkov,et al. Synthetic and Natural Noise Both Break Neural Machine Translation , 2017, ICLR.
[26] Quoc V. Le,et al. Sequence to Sequence Learning with Neural Networks , 2014, NIPS.
[27] Ani Nenkova,et al. A Survey of Text Summarization Techniques , 2012, Mining Text Data.
[28] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[29] Jiajun Zhang,et al. NCLS: Neural Cross-Lingual Summarization , 2019, EMNLP.
[30] Paul O'Leary McCann,et al. fugashi, a Tool for Tokenizing Japanese in Python , 2020, NLPOSS.
[31] M. Maybury,et al. Automatic Summarization , 2002, Computational Linguistics.
[32] Phil Blunsom,et al. Teaching Machines to Read and Comprehend , 2015, NIPS.
[33] George Kurian,et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation , 2016, ArXiv.
[34] Abdul Syukur,et al. Review of automatic text summarization techniques & methods , 2020, J. King Saud Univ. Comput. Inf. Sci..
[35] Guillaume Lample,et al. Cross-lingual Language Model Pretraining , 2019, NeurIPS.
[36] Ming Zhou,et al. ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training , 2020, Findings of EMNLP.
[37] Sebastian Ruder,et al. Universal Language Model Fine-tuning for Text Classification , 2018, ACL.
[38] Ryan McDonald,et al. On Faithfulness and Factuality in Abstractive Summarization , 2020, ACL.
[39] Claire Cardie,et al. Intrinsic Evaluation of Summarization Datasets , 2020, EMNLP.
[40] Sylvain Lamprier,et al. MLSUM: The Multilingual Summarization Corpus , 2020, EMNLP.
[41] Mirella Lapata,et al. Text Summarization with Pretrained Encoders , 2019, EMNLP.
[42] Xiaojun Wan,et al. MultiSumm: Towards a Unified Model for Multi-Lingual Abstractive Summarization , 2020, AAAI.
[43] Noam Shazeer,et al. Adafactor: Adaptive Learning Rates with Sublinear Memory Cost , 2018, ICML.
[44] Wei Lu,et al. Integrating Machine Learning with Human Knowledge , 2020, iScience.