Evaluating saliency map explanations for convolutional neural networks: a user study
Ahmed Alqaraawi | Martin Schuessler | Philipp Weiß | Enrico Costanza | Nadia Bianchi-Berthouze