Learning to Faithfully Rationalize by Construction
Sarthak Jain | Sarah Wiegreffe | Yuval Pinter | Byron C. Wallace
[1] Yuval Pinter,et al. Attention is not not Explanation , 2019, EMNLP.
[2] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[3] Andrew Zisserman,et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps , 2013, ICLR.
[4] Luke S. Zettlemoyer,et al. AllenNLP: A Deep Semantic Natural Language Processing Platform , 2018, ArXiv.
[5] Andrew McCallum,et al. Energy and Policy Considerations for Deep Learning in NLP , 2019, ACL.
[6] Dan Roth,et al. Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences , 2018, NAACL.
[7] Mark O. Riedl,et al. Automated rationale generation: a technique for explainable AI and its effects on human perceptions , 2019, IUI.
[8] Christine D. Piatko,et al. Using “Annotator Rationales” to Improve Machine Learning for Text Categorization , 2007, NAACL.
[9] Byron C. Wallace,et al. Attention is not Explanation , 2019, NAACL.
[10] Yoshua Bengio,et al. Neural Machine Translation by Jointly Learning to Align and Translate , 2014, ICLR.
[11] Tommi S. Jaakkola,et al. Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control , 2019, EMNLP.
[12] Andrew McCallum,et al. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data , 2001, ICML.
[13] Roy Schwartz,et al. Show Your Work: Improved Reporting of Experimental Results , 2019, EMNLP.
[14] Graham Neubig,et al. Learning to Deceive with Attention-Based Explanations , 2020, ACL.
[15] Xinlei Chen,et al. Visualizing and Understanding Neural Models in NLP , 2015, NAACL.
[16] Regina Barzilay,et al. Inferring Which Medical Treatments Work from Reports of Clinical Trials , 2019, NAACL.
[17] Ye Zhang,et al. Rationale-Augmented Convolutional Neural Networks for Text Classification , 2016, EMNLP.
[18] Ronald J. Williams,et al. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning , 1992, Machine Learning.
[19] Jason Eisner,et al. Modeling Annotators: A Generative Approach to Learning from Annotator Rationales , 2008, EMNLP.
[20] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[21] Zachary Chase Lipton. The Mythos of Model Interpretability , 2016, ACM Queue.
[22] Oluwasanmi Koyejo,et al. Examples are not enough, learn to criticize! Criticism for Interpretability , 2016, NIPS.
[23] Kathleen McKeown,et al. Fine-grained Sentiment Analysis with Faithful Attention , 2019, ArXiv.
[24] Byron C. Wallace,et al. ERASER: A Benchmark to Evaluate Rationalized NLP Models , 2020, ACL.
[25] Ye Zhang,et al. Do Human Rationales Improve Machine Explanations? , 2019, BlackboxNLP@ACL.
[26] Carlos Guestrin,et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier , 2016, KDD.
[27] Iz Beltagy,et al. SciBERT: A Pretrained Language Model for Scientific Text , 2019, EMNLP.
[28] Bernease Herman,et al. The Promise and Peril of Human Evaluation for Model Interpretability , 2017, ArXiv.
[29] Carla E. Brodley,et al. The Constrained Weight Space SVM: Learning with Ranked Features , 2011, ICML.
[30] Mark O. Riedl,et al. Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations , 2017, AIES.
[31] Cynthia Rudin,et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead , 2018, Nature Machine Intelligence.
[32] Yang Liu,et al. On Identifiability in Transformers , 2020, ICLR.
[33] Ivan Titov,et al. Interpretable Neural Predictions with Differentiable Binary Variables , 2019, ACL.
[34] Shi Feng,et al. Pathologies of Neural Models Make Interpretations Difficult , 2018, EMNLP.
[35] R'emi Louf,et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing , 2019, ArXiv.
[36] Omer Levy,et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach , 2019, ArXiv.
[37] Christopher Potts,et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank , 2013, EMNLP.
[38] Francesco Romani,et al. Ranking a Stream of News , 2005, WWW.
[39] Francesca Toni,et al. Human-grounded Evaluations of Explanation Methods for Text Classification , 2019, EMNLP.
[40] Been Kim,et al. Towards A Rigorous Science of Interpretable Machine Learning , 2017, 1702.08608.
[42] Regina Barzilay,et al. Rationalizing Neural Predictions , 2016, EMNLP.