Attention is not not Explanation
[1] Christopher Potts, et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.
[2] Byron C. Wallace, et al. Attention is not Explanation, 2019, NAACL.
[3] Abeed Sarker, et al. Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features, 2015, J. Am. Medical Informatics Assoc.
[4] Cynthia Rudin, et al. Please Stop Explaining Black Box Models for High Stakes Decisions, 2018, ArXiv.
[5] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, ArXiv.
[6] Mark O. Riedl, et al. Automated rationale generation: a technique for explainable AI and its effects on human perceptions, 2019, IUI.
[7] Maria Leonor Pacheco, et al. of the Association for Computational Linguistics, 2001.
[8] Mark O. Riedl. Human-Centered Artificial Intelligence and Machine Learning, 2019, Human Behavior and Emerging Technologies.
[9] Regina Barzilay, et al. Rationalizing Neural Predictions, 2016, EMNLP.
[10] Phil Blunsom, et al. Reasoning about Entailment with Neural Attention, 2015, ICLR.
[11] Jimeng Sun, et al. Explainable Prediction of Medical Codes from Clinical Text, 2018, NAACL.
[12] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[13] Andreas Vlachos, et al. Generating Token-Level Explanations for Natural Language Inference, 2019, NAACL.
[14] Andrew Slavin Ross, et al. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations, 2017, IJCAI.
[15] Peter Szolovits, et al. MIMIC-III, a freely accessible critical care database, 2016, Scientific Data.
[16] Yoshua Bengio, et al. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, 2015, ICML.
[17] Christopher Potts, et al. Learning Word Vectors for Sentiment Analysis, 2011, ACL.
[18] Zachary Chase Lipton. The mythos of model interpretability, 2016, ACM Queue.
[19] Cynthia Rudin, et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, 2018, Nature Machine Intelligence.
[20] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.