Explain and Predict, and then Predict Again
[1] Nitesh V. Chawla, et al. SMOTE: Synthetic Minority Over-sampling Technique. J. Artif. Intell. Res., 2002.
[2] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions. NIPS, 2017.
[3] Rich Caruana, et al. Multitask Learning. Encyclopedia of Machine Learning and Data Mining, 1998.
[4] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. ArXiv, 2016.
[5] Sebastian Riedel, et al. Language Models as Knowledge Bases? EMNLP, 2019.
[6] Emily Chen, et al. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation. ArXiv, 2018.
[7] Avishek Anand, et al. TableNet: An Approach for Determining Fine-grained Relations for Wikipedia Tables. WWW, 2019.
[8] Bo Pang, et al. A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts. ACL, 2004.
[9] Wolfgang Nejdl, et al. Exploring Web Archives Through Temporal Anchor Texts. WebSci, 2017.
[10] Benjamin Schrauwen, et al. Training and Analysing Deep Recurrent Neural Networks. NIPS, 2013.
[11] Andrew Slavin Ross, et al. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations. IJCAI, 2017.
[12] Fei-Fei Li, et al. Visualizing and Understanding Recurrent Networks. ArXiv, 2015.
[13] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions. ICML, 2017.
[14] Dan Roth, et al. Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences. NAACL, 2018.
[15] Mihaela van der Schaar, et al. INVASE: Instance-wise Variable Selection using Neural Networks. ICLR, 2018.
[16] Zijian Zhang, et al. Dissonance Between Human and Machine Understanding. Proc. ACM Hum. Comput. Interact., 2019.
[17] Christine D. Piatko, et al. Using "Annotator Rationales" to Improve Machine Learning for Text Categorization. NAACL, 2007.
[18] Ting Liu, et al. Attention-over-Attention Neural Networks for Reading Comprehension. ACL, 2016.
[19] Ramón Fernández Astudillo, et al. From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification. ICML, 2016.
[20] Ivan Titov, et al. Interpretable Neural Predictions with Differentiable Binary Variables. ACL, 2019.
[21] Daniel Jurafsky, et al. Understanding Neural Networks through Representation Erasure. ArXiv, 2016.
[22] Kathleen McKeown, et al. Fine-grained Sentiment Analysis with Faithful Attention. ArXiv, 2019.
[23] Yoshua Bengio, et al. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. ICML, 2015.
[24] Cynthia Rudin, et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 2018.
[25] Ye Zhang, et al. Rationale-Augmented Convolutional Neural Networks for Text Classification. EMNLP, 2016.
[26] Regina Barzilay, et al. Rationalizing Neural Predictions. EMNLP, 2016.
[27] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization. ICLR, 2014.
[28] Avishek Anand, et al. Model Agnostic Interpretability of Rankers via Intent Modelling. FAT*, 2020.
[29] Regina Barzilay, et al. Inferring Which Medical Treatments Work from Reports of Clinical Trials. NAACL, 2019.
[30] Jason Eisner, et al. Modeling Annotators: A Generative Approach to Learning from Annotator Rationales. EMNLP, 2008.
[31] Andreas Vlachos, et al. FEVER: A Large-scale Dataset for Fact Extraction and VERification. NAACL, 2018.
[32] Ye Zhang, et al. Do Human Rationales Improve Machine Explanations? BlackboxNLP@ACL, 2019.
[33] Wolfgang Nejdl, et al. The Dawn of Today's Popular Domains: A Study of the Archived German Web over 18 Years. IEEE/ACM Joint Conference on Digital Libraries (JCDL), 2016.
[34] Diyi Yang, et al. Hierarchical Attention Networks for Document Classification. NAACL, 2016.
[35] Byron C. Wallace, et al. ERASER: A Benchmark to Evaluate Rationalized NLP Models. ACL, 2020.
[36] Mirella Lapata, et al. Long Short-Term Memory-Networks for Machine Reading. EMNLP, 2016.
[37] Wolfgang Nejdl, et al. Expedition: A Time-Aware Exploratory Search System Designed for Scholars. SIGIR, 2016.
[38] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL, 2019.
[39] Yuval Pinter, et al. Attention is not not Explanation. EMNLP, 2019.
[40] Byron C. Wallace, et al. Attention is not Explanation. NAACL, 2019.
[41] Ming Yang, et al. Entity Recognition from Clinical Texts via Recurrent Neural Network. BMC Medical Informatics and Decision Making, 2017.