NLQuAD: A Non-Factoid Long Question Answering Data Set
[1] M. de Rijke, et al. Conversations with Documents: An Exploration of Document-Centered Assistance, 2020, CHIIR.
[2] Mirella Lapata, et al. Don’t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization, 2018, EMNLP.
[3] W. Bruce Croft, et al. End to End Long Short Term Memory Networks for Non-Factoid Question Answering, 2016, ICTIR.
[4] Gabriel Stanovsky, et al. DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs, 2019, NAACL.
[5] Kyunghyun Cho, et al. SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine, 2017, ArXiv.
[6] Oren Etzioni, et al. Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge, 2018, ArXiv.
[7] Liu Yang, et al. Long Range Arena: A Benchmark for Efficient Transformers, 2020, ICLR.
[8] Guokun Lai, et al. RACE: Large-scale ReAding Comprehension Dataset From Examinations, 2017, EMNLP.
[9] Arman Cohan, et al. Longformer: The Long-Document Transformer, 2020, ArXiv.
[10] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[11] Chin-Yew Lin, et al. Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence and Skip-Bigram Statistics, 2004, ACL.
[12] Philip Bachman, et al. NewsQA: A Machine Comprehension Dataset, 2016, Rep4NLP@ACL.
[13] W. Bruce Croft, et al. ANTIQUE: A Non-factoid Question Answering Benchmark, 2019, ECIR.
[14] Yejin Choi, et al. SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference, 2018, EMNLP.
[15] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[16] Yejin Choi, et al. Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning, 2019, EMNLP.
[17] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[18] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[19] Eunsol Choi, et al. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension, 2017, ACL.
[20] Jason Weston, et al. ELI5: Long Form Question Answering, 2019, ACL.
[21] Mitesh M. Khapra, et al. DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension, 2018, ACL.
[22] Ming-Wei Chang, et al. Natural Questions: A Benchmark for Question Answering Research, 2019, TACL.
[23] Andrew Trotman, et al. Improvements to BM25 and Language Models Examined, 2014, ADCS.
[24] Thomas Wolf, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, ArXiv.
[25] Xinyan Xiao, et al. DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications, 2017, QA@ACL.
[26] Yoshua Bengio, et al. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering, 2018, EMNLP.
[27] Jianfeng Gao, et al. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset, 2018.
[28] Chris Dyer, et al. The NarrativeQA Reading Comprehension Challenge, 2017, TACL.
[29] Fabio Crestani, et al. Longformer for MS MARCO Document Re-ranking Task, 2020, ArXiv.
[30] Yi Tay, et al. Efficient Transformers: A Survey, 2020, ArXiv.