Nai Ding | Cheng Luo | Peiqing Jin | Xunyi Pan | Jiajie Zou | Yuran Zhang
[1] Shuohang Wang, et al. What does BERT Learn from Multiple-Choice Reading Comprehension Datasets?, 2019, arXiv.
[2] Wentao Ma, et al. Benchmarking Robustness of Machine Reading Comprehension Models, 2021, Findings of ACL.
[3] Roger Levy, et al. STARC: Structured Annotations for Reading Comprehension, 2020, ACL.
[4] Byron C. Wallace, et al. ERASER: A Benchmark to Evaluate Rationalized NLP Models, 2020, ACL.
[5] Regina Barzilay, et al. Rationalizing Neural Predictions, 2016, EMNLP.
[6] Ye Zhang, et al. Rationale-Augmented Convolutional Neural Networks for Text Classification, 2016, EMNLP.
[7] Guokun Lai, et al. RACE: Large-scale ReAding Comprehension Dataset From Examinations, 2017, EMNLP.
[8] Christine D. Piatko, et al. Using “Annotator Rationales” to Improve Machine Learning for Text Categorization, 2007, NAACL.
[9] Omer Levy, et al. Annotation Artifacts in Natural Language Inference Data, 2018, NAACL.