SQuAD2-CR: Semi-supervised Annotation for Cause and Rationales for Unanswerability in SQuAD 2.0