Yonatan Belinkov | Zachary M. Ziegler | Dimion Asael
[1] Lei Zheng, et al. Texygen: A Benchmarking Platform for Text Generation Models, 2018, SIGIR.
[2] Mike Lewis, et al. Generative Question Answering: Learning to Answer the Whole Question, 2019, ICLR.
[3] R. Thomas McCoy, et al. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference, 2019, ACL.
[4] Zachary C. Lipton, et al. How Much Reading Does Reading Comprehension Require? A Critical Investigation of Popular Benchmarks, 2018, EMNLP.
[5] Salim Roukos, et al. Bleu: a Method for Automatic Evaluation of Machine Translation, 2002, ACL.
[6] Dhruv Batra, et al. Don't Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering, 2018, CVPR.
[7] Omer Levy, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, 2020, ACL.
[8] Yonatan Belinkov, et al. On Adversarial Removal of Hypothesis-only Bias in Natural Language Inference, 2019, *SEM.
[9] Omer Levy, et al. Annotation Artifacts in Natural Language Inference Data, 2018, NAACL.
[10] Yonatan Belinkov, et al. Learning from others' mistakes: Avoiding dataset biases without modeling them, 2021, ICLR.
[11] Pasquale Minervini, et al. There is Strength in Numbers: Avoiding the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training, 2020, EMNLP.
[12] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[13] Iryna Gurevych, et al. Mind the Trade-off: Debiasing NLU Models without Degrading the In-distribution Performance, 2020, ACL.
[14] Christopher Potts, et al. A large annotated corpus for learning natural language inference, 2015, EMNLP.
[15] Bilal Alsallakh, et al. Captum: A unified and generic model interpretability library for PyTorch, 2020, arXiv.
[16] Thomas Wolf, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, arXiv.
[17] Rachel Rudinger, et al. Hypothesis Only Baselines in Natural Language Inference, 2018, *SEM.
[18] Thomas Lukasiewicz, et al. e-SNLI: Natural Language Inference with Natural Language Explanations, 2018, NeurIPS.
[19] Rico Sennrich, et al. Controlling Politeness in Neural Machine Translation via Side Constraints, 2016, NAACL.
[20] Masatoshi Tsuchiya, et al. Performance Impact Caused by Hidden Bias of Training Data for Recognizing Textual Entailment, 2018, LREC.
[21] Haohan Wang, et al. Unlearn Dataset Bias in Natural Language Inference by Fitting the Residual, 2019, EMNLP.
[22] Samuel R. Bowman, et al. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference, 2018, NAACL.
[23] Regina Barzilay, et al. Towards Debiasing Fact Verification Models, 2019, EMNLP.
[24] Danna Gurari, et al. Dataset bias: A case study for visual question answering, 2019, ASIST.
[25] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[26] Iryna Gurevych, et al. Towards Debiasing NLU Models from Unknown Biases, 2020, EMNLP.
[27] Yonatan Belinkov, et al. Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference, 2019, ACL.
[28] Yonatan Belinkov, et al. End-to-End Bias Mitigation by Modelling Biases in Corpora, 2020, ACL.