A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems