Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals
[1] Yoav Goldberg, et al. Adversarial Removal of Demographic Attributes from Text Data, 2018, EMNLP.
[2] Alex Wang, et al. What do you learn from context? Probing for sentence structure in contextualized word representations, 2019, ICLR.
[3] Robert Frank, et al. Open Sesame: Getting inside BERT’s Linguistic Knowledge, 2019, BlackboxNLP@ACL.
[4] Leyang Cui, et al. Evaluating Commonsense in Pre-trained Language Models, 2019, AAAI.
[5] Yonatan Belinkov, et al. Linguistic Knowledge and Transferability of Contextual Representations, 2019, NAACL.
[6] Anna Rumshisky, et al. A Primer in BERTology: What We Know About How BERT Works, 2020, Transactions of the Association for Computational Linguistics.
[7] Yonatan Belinkov, et al. Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks, 2016, ICLR.
[8] Yonatan Belinkov, et al. Causal Mediation Analysis for Interpreting Neural NLP: The Case of Gender Bias, 2020, ArXiv.
[9] François Laviolette, et al. Domain-Adversarial Training of Neural Networks, 2015, J. Mach. Learn. Res.
[10] Yonatan Belinkov, et al. Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance?, 2020, EACL.
[11] Yash Goyal, et al. Explaining Classifiers with Causal Concept Effect (CaCE), 2019, ArXiv.
[12] Aleksandra Gabryszak, et al. Probing Linguistic Features of Sentence-Level Representations in Relation Extraction, 2020, ACL.
[13] Yoav Goldberg, et al. Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection, 2020, ACL.
[14] Florian Mohnert, et al. Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information, 2018, BlackboxNLP@EMNLP.
[15] Dipanjan Das, et al. BERT Rediscovers the Classical NLP Pipeline, 2019, ACL.
[16] J. Pearl, et al. The Book of Why: The New Science of Cause and Effect, 2018.
[17] John Hewitt, et al. Designing and Interpreting Probes with Control Tasks, 2019, EMNLP.
[18] Martin Wattenberg, et al. Visualizing and Measuring the Geometry of BERT, 2019, NeurIPS.
[19] Christopher D. Manning, et al. A Structural Probe for Finding Syntax in Word Representations, 2019, NAACL.
[20] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[21] Rowan Hall Maudslay, et al. Information-Theoretic Probing for Linguistic Structure, 2020, ACL.
[22] Yoav Goldberg, et al. Assessing BERT's Syntactic Abilities, 2019, ArXiv.
[23] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[24] Thomas Wolf, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, ArXiv.
[25] Eduard Hovy, et al. Learning the Difference that Makes a Difference with Counterfactually-Augmented Data, 2020, ICLR.
[26] Shikha Bordia, et al. Investigating BERT’s Knowledge of Language: Five Analysis Methods with NPIs, 2019, EMNLP.
[27] Noah Goodman, et al. Investigating Transferability in Pretrained Language Models, 2020, EMNLP.
[28] Uri Shalit, et al. CausaLM: Causal Model Explanation Through Counterfactual Language Models, 2020, CL.
[29] Benjamin Van Durme, et al. Probing Neural Language Models for Human Tacit Assumptions, 2020, CogSci.
[30] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[31] Gaël Varoquaux, et al. Scikit-learn: Machine Learning in Python, 2011, J. Mach. Learn. Res.
[32] Guillaume Lample, et al. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties, 2018, ACL.
[33] J. Rissanen. Modeling By Shortest Data Description, 1978, Autom.
[34] Ivan Titov, et al. Information-Theoretic Probing with Minimum Description Length, 2020, EMNLP.
[35] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[36] Joakim Nivre, et al. Universal Dependency Annotation for Multilingual Parsing, 2013, ACL.
[37] Graham Neubig, et al. How Can We Know What Language Models Know?, 2019, Transactions of the Association for Computational Linguistics.
[38] Jonathan Berant, et al. oLMpics - On What Language Model Pre-training Captures, 2019, Transactions of the Association for Computational Linguistics.
[39] Yejin Choi, et al. Do Neural Language Representations Learn Physical Commonsense?, 2019, CogSci.
[40] Sara Veldhoen, et al. Visualisation and 'Diagnostic Classifiers' Reveal How Recurrent and Recursive Neural Networks Process Hierarchical Structure, 2018, J. Artif. Intell. Res.
[41] Sebastian Riedel, et al. Language Models as Knowledge Bases?, 2019, EMNLP.