[1] Koray Kavukcuoglu,et al. Multiple Object Recognition with Visual Attention , 2014, ICLR.
[2] Mariusz Bojarski,et al. VisualBackProp: Efficient Visualization of CNNs for Autonomous Driving , 2018, 2018 IEEE International Conference on Robotics and Automation (ICRA).
[3] Gary Klein,et al. Metrics for Explainable AI: Challenges and Prospects , 2018, ArXiv.
[4] Emilee J. Rader,et al. Explanations as Mechanisms for Supporting Algorithmic Transparency , 2018, CHI.
[5] Alex Endert,et al. 7 key challenges for visualization in cyber network defense , 2014, VizSEC.
[6] Cynthia Rudin,et al. Falling Rule Lists , 2014, AISTATS.
[7] Eric Horvitz,et al. Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure , 2018, HCOMP.
[8] Tamara Munzner,et al. The nested blocks and guidelines model , 2015, Inf. Vis.
[9] F. Keil,et al. Explanation and understanding , 2015 .
[10] Kristina Lerman,et al. A Survey on Bias and Fairness in Machine Learning , 2019, ACM Comput. Surv.
[11] Jaedeok Kim,et al. Human Understandable Explanation Extraction for Black-box Classification Models Based on Matrix Factorization , 2017, ArXiv.
[12] Laura A. Dabbish,et al. Working with Machines: The Impact of Algorithmic and Data-Driven Management on Human Workers , 2015, CHI.
[13] Felix Bießmann,et al. Quantifying Interpretability and Trust in Machine Learning Systems , 2019, ArXiv.
[14] Izak Benbasat,et al. Explanations From Intelligent Systems: Theoretical Foundations and Implications for Practice , 1999, MIS Q.
[15] Michael Chromik,et al. Dark Patterns of Explainability, Transparency, and User Control for Intelligent Systems , 2019, IUI Workshops.
[16] Weng-Keen Wong,et al. Principles of Explanatory Debugging to Personalize Interactive Machine Learning , 2015, IUI.
[17] Raquel Flórez López,et al. Enhancing accuracy and interpretability of ensemble strategies in credit risk assessment. A correlated-adjusted decision forest proposal , 2015, Expert Syst. Appl.
[18] Arvind Satyanarayan,et al. The Building Blocks of Interpretability , 2018 .
[19] Minsuk Kahng,et al. Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers , 2018, IEEE Transactions on Visualization and Computer Graphics.
[20] Tim Miller,et al. Explanation in Artificial Intelligence: Insights from the Social Sciences , 2017, Artif. Intell.
[21] Deborah L. McGuinness,et al. Toward establishing trust in adaptive agents , 2008, IUI '08.
[22] R. Kennedy,et al. Defense Advanced Research Projects Agency (DARPA). Change 1 , 1996 .
[23] Jeroen van den Hoven,et al. Breaking the filter bubble: democracy and design , 2015, Ethics and Information Technology.
[24] Karrie Karahalios,et al. "Be Careful; Things Can Be Worse than They Appear": Understanding Biased Algorithms and Users' Behavior Around Them in Rating Platforms , 2017, ICWSM.
[25] Jeffrey M. Bradshaw,et al. Myths of Automation, Part 2: Some Very Human Consequences , 2014, IEEE Intelligent Systems.
[26] Deborah Lee,et al. I Trust It, but I Don’t Know Why , 2013, Hum. Factors.
[27] Roderick M. Kramer,et al. Swift trust and temporary groups , 1996 .
[28] Li Chen,et al. Trust building with explanation interfaces , 2006, IUI '06.
[29] Anind K. Dey,et al. Why and why not explanations improve the intelligibility of context-aware intelligent systems , 2009, CHI.
[30] Nicholas Diakopoulos. Enabling Accountability of Algorithmic Media: Transparency as a Constructive and Critical Lens , 2017 .
[31] Mouzhi Ge,et al. How should I explain? A comparison of different explanation types for recommender systems , 2014, Int. J. Hum. Comput. Stud.
[32] Andrea Bunt,et al. Are explanations always important?: a study of deployed, low-cost intelligent interactive systems , 2012, IUI '12.
[33] Zhangyang Wang,et al. Predicting Model Failure using Saliency Maps in Autonomous Driving Systems , 2019, ArXiv.
[34] Jun Zhu,et al. Analyzing the Training Processes of Deep Generative Models , 2018, IEEE Transactions on Visualization and Computer Graphics.
[35] Anind K. Dey,et al. Support for context-aware intelligibility and control , 2009, CHI.
[36] Mark R. Lehto,et al. Foundations for an Empirically Determined Scale of Trust in Automated Systems , 2000 .
[37] Daniel A. Keim,et al. The Role of Uncertainty, Awareness, and Trust in Visual Analytics , 2016, IEEE Transactions on Visualization and Computer Graphics.
[38] S. Gregor,et al. Measuring Human-Computer Trust , 2000 .
[39] Maya Cakmak,et al. Power to the People: The Role of Humans in Interactive Machine Learning , 2014, AI Mag.
[40] Frank E. Ritter,et al. Designs for explaining intelligent agents , 2009, Int. J. Hum. Comput. Stud.
[41] Leanne M. Hirshfield,et al. The Construct of State-Level Suspicion , 2013, Hum. Factors.
[42] Stephen Muggleton,et al. How Does Predicate Invention Affect Human Comprehensibility? , 2016, ILP.
[43] Heinrich Hußmann,et al. I Drive - You Trust: Explaining Driving Behavior Of Autonomous Cars , 2019, CHI Extended Abstracts.
[44] Weng-Keen Wong,et al. Towards recognizing "cool": can end users help computer vision recognize subjective attributes of objects in images? , 2012, IUI '12.
[45] Cynthia Rudin,et al. Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model , 2015, ArXiv.
[46] Gary Klein,et al. Improving Users' Mental Models of Intelligent Software Tools , 2011, IEEE Intelligent Systems.
[47] Catherine Plaisant,et al. The challenge of information visualization evaluation , 2004, AVI.
[48] W. Keith Edwards,et al. Intelligibility and Accountability: Human Considerations in Context-Aware Systems , 2001, Hum. Comput. Interact.
[49] Adrian Weller,et al. Transparency: Motivations and Challenges , 2019, Explainable AI.
[50] Stephanie Rosenthal,et al. Verbalization: Narration of Autonomous Robot Experience , 2016, IJCAI.
[51] Carlos Guestrin,et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier , 2016, ArXiv.
[52] Quentin Pleple,et al. Interactive Topic Modeling , 2013 .
[53] Duen Horng Chau,et al. Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations , 2019, IEEE Transactions on Visualization and Computer Graphics.
[54] Bernease Herman,et al. The Promise and Peril of Human Evaluation for Model Interpretability , 2017, ArXiv.
[55] Dympna O'Sullivan,et al. The Role of Explanations on Trust and Reliance in Clinical Decision Support Systems , 2015, 2015 International Conference on Healthcare Informatics.
[56] Sarvapali D. Ramchurn,et al. Doing the laundry with agents: a field trial of a future smart energy system in the home , 2014, CHI.
[57] Kenney Ng,et al. Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models , 2016, CHI.
[58] Huan Liu,et al. eTrust: understanding trust evolution in an online world , 2012, KDD.
[59] Shagun Jhaver,et al. Algorithmic Anxiety and Coping Strategies of Airbnb Hosts , 2018, CHI.
[60] William J. Clancey,et al. Explaining Explanation, Part 4: A Deep Dive on Deep Nets , 2018, IEEE Intelligent Systems.
[61] Andrea Vedaldi,et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[62] Melanie Tory,et al. Human factors in visualization research , 2004, IEEE Transactions on Visualization and Computer Graphics.
[63] David S. Ebert,et al. FinVis: Applied visual analytics for personal financial planning , 2009, 2009 IEEE Symposium on Visual Analytics Science and Technology.
[64] Daniel A. Keim,et al. Human-centered machine learning through interactive visualization , 2016 .
[65] Aniket Kittur,et al. Crowdsourcing user studies with Mechanical Turk , 2008, CHI.
[66] Thomas G. Dietterich,et al. Interacting meaningfully with machine learning systems: Three experiments , 2009, Int. J. Hum. Comput. Stud.
[67] Andrew Zisserman,et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps , 2013, ICLR.
[68] E. Langer,et al. The Mindlessness of Ostensibly Thoughtful Action: The Role of "Placebic" Information in Interpersonal Interaction , 1978 .
[69] Rob Fergus,et al. Visualizing and Understanding Convolutional Networks , 2013, ECCV.
[70] Weng-Keen Wong,et al. Too much, too little, or just right? Ways explanations impact end users' mental models , 2013, 2013 IEEE Symposium on Visual Languages and Human Centric Computing.
[71] Oluwasanmi Koyejo,et al. Examples are not enough, learn to criticize! Criticism for Interpretability , 2016, NIPS.
[72] Scott Lundberg,et al. A Unified Approach to Interpreting Model Predictions , 2017, NIPS.
[73] Jichen Zhu,et al. Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation , 2018, 2018 IEEE Conference on Computational Intelligence and Games (CIG).
[74] Karrie Karahalios,et al. Auditing Algorithms : Research Methods for Detecting Discrimination on Internet Platforms , 2014 .
[75] Min Kyung Lee,et al. Procedural Justice in Algorithmic Fairness , 2019, Proc. ACM Hum. Comput. Interact.
[76] Mike Wu,et al. Beyond Sparsity: Tree Regularization of Deep Models for Interpretability , 2017, AAAI.
[77] Alex Pentland,et al. Fair, Transparent, and Accountable Algorithmic Decision-making Processes , 2017, Philosophy & Technology.
[78] Todd Kulesza,et al. Tell me more?: the effects of mental model soundness on personalizing an intelligent agent , 2012, CHI.
[79] Shie Mannor,et al. Graying the black box: Understanding DQNs , 2016, ICML.
[80] Alexander Binder,et al. Evaluating the Visualization of What a Deep Neural Network Has Learned , 2015, IEEE Transactions on Neural Networks and Learning Systems.
[81] Been Kim,et al. Sanity Checks for Saliency Maps , 2018, NeurIPS.
[82] Judith Masthoff,et al. Designing and Evaluating Explanations for Recommender Systems , 2011, Recommender Systems Handbook.
[83] Karin Coninx,et al. PervasiveCrystal: Asking and Answering Why and Why Not Questions about Pervasive Computing Applications , 2010, 2010 Sixth International Conference on Intelligent Environments.
[84] Elmar Eisemann,et al. DeepEyes: Progressive Visual Analytics for Designing Deep Neural Networks , 2018, IEEE Transactions on Visualization and Computer Graphics.
[85] Marko Bohanec,et al. Perturbation-Based Explanations of Prediction Models , 2018, Human and Machine Learning.
[86] Heinrich Hußmann,et al. The Impact of Placebic Explanations on Trust in Intelligent Systems , 2019, CHI Extended Abstracts.
[87] Bernt Schiele,et al. Towards improving trust in context-aware systems by displaying system confidence , 2005, Mobile HCI.
[88] Latanya Sweeney,et al. Discrimination in online ad delivery , 2013, CACM.
[89] John Riedl,et al. Explaining collaborative filtering recommendations , 2000, CSCW '00.
[90] Duane Szafron,et al. Visual Explanation of Evidence with Additive Classifiers , 2006, AAAI.
[91] Johannes Gehrke,et al. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission , 2015, KDD.
[92] Quanshi Zhang,et al. Visual interpretability for deep learning: a survey , 2018, Frontiers of Information Technology & Electronic Engineering.
[93] Bonnie M. Muir,et al. Trust Between Humans and Machines, and the Design of Decision Aids , 1987, Int. J. Man Mach. Stud.
[94] Hinrich Schütze,et al. Evaluating neural network explanation methods using hybrid documents and morphological agreement , 2018 .
[95] Eric D. Ragan,et al. The Effects of Meaningful and Meaningless Explanations on Trust and Perceived System Accuracy in Intelligent Systems , 2019, HCOMP.
[96] Simone Stumpf,et al. User Trust in Intelligent Systems: A Journey Over Time , 2016, IUI.
[97] Rebecca Gray,et al. Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed , 2015, CHI.
[98] Mohan S. Kankanhalli,et al. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda , 2018, CHI.
[99] Brent Mittelstadt,et al. Automation, Algorithms, and Politics | Auditing for Transparency in Content Personalization Systems , 2016 .
[100] Bistra N. Dilkina,et al. A Deep Learning Approach for Population Estimation from Satellite Imagery , 2017, GeoHumanities@SIGSPATIAL.
[101] Zachary Chase Lipton. The mythos of model interpretability , 2016, ACM Queue.
[102] Jaegul Choo,et al. Visual Analytics for Explainable Deep Learning , 2018, IEEE Computer Graphics and Applications.
[103] James Zou,et al. Towards Automatic Concept-based Explanations , 2019, NeurIPS.
[104] Colin M. Gray,et al. The Dark (Patterns) Side of UX Design , 2018, CHI.
[105] Raymond J. Mooney,et al. Explaining Recommendations: Satisfaction vs. Promotion , 2005 .
[106] Robert A. Bridges,et al. Situ: Identifying and Explaining Suspicious Behavior in Networks , 2019, IEEE Transactions on Visualization and Computer Graphics.
[107] Jaegul Choo,et al. iVisClassifier: An interactive visual analytics system for classification based on supervised dimension reduction , 2010, 2010 IEEE Symposium on Visual Analytics Science and Technology.
[108] Alexandra Chouldechova,et al. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments , 2016, Big Data.
[109] M. Sheelagh T. Carpendale,et al. Evaluating Information Visualizations , 2008, Information Visualization.
[110] Adrian Weller,et al. Challenges for Transparency , 2017, ArXiv.
[111] Yindalon Aphinyanagphongs,et al. A Workflow for Visual Diagnostics of Binary Classifiers using Instance-Level Explanations , 2017, 2017 IEEE Conference on Visual Analytics Science and Technology (VAST).
[112] Zhen Li,et al. Understanding Hidden Memories of Recurrent Neural Networks , 2017, 2017 IEEE Conference on Visual Analytics Science and Technology (VAST).
[113] Max Welling,et al. Visualizing Deep Neural Network Decisions: Prediction Difference Analysis , 2017, ICLR.
[114] Eric W. Weisstein,et al. Closed-Form Solution , 2002 .
[115] Jian Pei,et al. Exact and Consistent Interpretation for Piecewise Linear Neural Networks: A Closed Form Solution , 2018, KDD.
[116] Tal Z. Zarsky,et al. The Trouble with Algorithmic Decisions , 2016 .
[117] Seth Flaxman,et al. European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation" , 2016, AI Mag.
[118] Zhen Li,et al. Towards Better Analysis of Deep Convolutional Neural Networks , 2016, IEEE Transactions on Visualization and Computer Graphics.
[119] Dumitru Erhan,et al. The (Un)reliability of saliency methods , 2017, Explainable AI.
[120] T. Lombrozo. The structure and function of explanations , 2006, Trends in Cognitive Sciences.
[121] Hod Lipson,et al. Understanding Neural Networks Through Deep Visualization , 2015, ArXiv.
[122] Madeleine Udell,et al. Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved , 2018, FAT.
[123] Tamara Munzner,et al. A Nested Model for Visualization Design and Validation , 2009, IEEE Transactions on Visualization and Computer Graphics.
[124] Samuel J. Gershman,et al. Human Evaluation of Models Built for Interpretability , 2019, HCOMP.
[125] Per Ola Kristensson,et al. A Review of User Interface Design for Interactive Machine Learning , 2018, ACM Trans. Interact. Intell. Syst.
[126] Emily Chen,et al. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation , 2018, ArXiv.
[127] Trevor Darrell,et al. Women also Snowboard: Overcoming Bias in Captioning Models , 2018, ECCV.
[128] Kristina Höök,et al. Steps to take before intelligent user interfaces become real , 2000, Interact. Comput.
[129] Samuel C. Woolley,et al. Automating power: Social bot interference in global politics , 2016, First Monday.
[130] Dhruv Batra,et al. Human Attention in Visual Question Answering: Do Humans and Deep Networks look at the same regions? , 2016, EMNLP.
[131] Andrew Slavin Ross,et al. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients , 2017, AAAI.
[132] Jouni Markkula,et al. EU General Data Protection Regulation: Changes and implications for personal data collecting companies , 2017, Comput. Law Secur. Rev.
[133] Melanie Tory,et al. Evaluating Visualizations: Do Expert Reviews Work? , 2005, IEEE Computer Graphics and Applications.
[134] Qian Yang,et al. Designing Theory-Driven User-Centric Explainable AI , 2019, CHI.
[135] Baining Guo,et al. TopicPanorama: A Full Picture of Relevant Topics , 2014, IEEE Transactions on Visualization and Computer Graphics.
[136] Eric Horvitz,et al. Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance , 2019, HCOMP.
[137] Francesca Toni,et al. Human-grounded Evaluations of Explanation Methods for Text Classification , 2019, EMNLP.
[138] Been Kim,et al. Towards A Rigorous Science of Interpretable Machine Learning , 2017, ArXiv 1702.08608.
[139] Tony Doyle,et al. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy , 2017, Inf. Soc.
[140] Yanjun Qi,et al. Adversarial-Playground: A visualization suite showing how adversarial examples fool deep learning , 2017, 2017 IEEE Symposium on Visualization for Cyber Security (VizSec).
[141] David Weinberger,et al. Accountability of AI Under the Law: The Role of Explanation , 2017, ArXiv.
[142] Alex Groce,et al. You Are the Only Possible Oracle: Effective Test Selection for End Users of Interactive Machine Learning Systems , 2014, IEEE Transactions on Software Engineering.
[143] Carlos Guestrin,et al. Anchors: High-Precision Model-Agnostic Explanations , 2018, AAAI.
[144] Martin Wattenberg,et al. Direct-Manipulation Visualization of Deep Networks , 2017, ArXiv.
[145] Jure Leskovec,et al. Interpretable Decision Sets: A Joint Framework for Description and Prediction , 2016, KDD.
[146] Stefan N. Groesser,et al. A comprehensive method for comparing mental models of dynamic systems , 2011, Eur. J. Oper. Res.
[147] Lalana Kagal,et al. Explaining Explanations: An Overview of Interpretability of Machine Learning , 2018, 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA).
[148] Wojciech Samek,et al. Methods for interpreting and understanding deep neural networks , 2017, Digit. Signal Process.
[149] Andrew Slavin Ross,et al. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations , 2017, IJCAI.
[150] Paul N. Bennett,et al. Guidelines for Human-AI Interaction , 2019, CHI.
[151] Wolfgang Minker,et al. Probabilistic Human-Computer Trust Handling , 2014, SIGDIAL Conference.
[152] Sean A. Munson,et al. When (ish) is My Bus?: User-centered Visualizations of Uncertainty in Everyday, Mobile Predictive Systems , 2016, CHI.
[153] Francisco Herrera,et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI , 2020, Inf. Fusion.
[154] Weng-Keen Wong,et al. Explanatory Debugging: Supporting End-User Debugging of Machine-Learned Programs , 2010, VL/HCC.
[155] Jo Vermeulen,et al. From today's augmented houses to tomorrow's smart homes: new directions for home automation research , 2014, UbiComp.
[156] Brad A. Myers,et al. Answering why and why not questions in user interfaces , 2006, CHI.
[157] Margaret M. Burnett,et al. Toward Foraging for Understanding of StarCraft Agents: An Empirical Study , 2017, IUI.
[158] Quanshi Zhang,et al. Examining CNN representations with respect to Dataset Bias , 2017, AAAI.
[159] Matteo Turilli,et al. The ethics of information transparency , 2009, Ethics and Information Technology.
[160] Zijian Zhang,et al. Dissonance Between Human and Machine Understanding , 2019, Proc. ACM Hum. Comput. Interact.
[161] Alexander Binder,et al. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation , 2015, PLoS ONE.
[162] Alex Endert,et al. Evaluating Interactive Graphical Encodings for Data Visualization , 2018, IEEE Transactions on Visualization and Computer Graphics.
[163] Margaret M. Burnett,et al. What Should Be in an XAI Explanation? What IFT Reveals , 2018, IUI Workshops.
[164] Martin Wattenberg,et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) , 2017, ICML.
[165] Franco Turini,et al. A Survey of Methods for Explaining Black Box Models , 2018, ACM Comput. Surv.
[166] Anind K. Dey,et al. Assessing demand for intelligibility in context-aware applications , 2009, UbiComp.
[167] Jeffrey M. Bradshaw,et al. Trust in Automation , 2013, IEEE Intelligent Systems.
[168] Mark Bilandzic,et al. Bringing Transparency Design into Practice , 2018, IUI.
[169] Qian Yang,et al. Why these Explanations? Selecting Intelligibility Types for Explanation Goals , 2019, IUI Workshops.
[170] James J. Thomas,et al. Visualizing the non-visual: spatial analysis and interaction with information from text documents , 1995, Proceedings of Visualization 1995 Conference.
[171] Eric D. Ragan,et al. A Human-Grounded Evaluation Benchmark for Local Explanations of Machine Learning , 2018, ArXiv.
[172] Michael Carl Tschantz,et al. Automated Experiments on Ad Privacy Settings , 2014, Proc. Priv. Enhancing Technol.
[173] Dieter Schmalstieg,et al. StratomeX: Visual Analysis of Large‐Scale Heterogeneous Genomics Data for Cancer Subtype Characterization , 2012, Comput. Graph. Forum.
[174] Balachander Krishnamurthy,et al. Measuring personalization of web search , 2013, WWW.
[175] Gary Klein,et al. Explaining Explanation, Part 2: Empirical Foundations , 2017, IEEE Intelligent Systems.
[176] Steven M. Drucker,et al. TeleGam: Combining Visualization and Verbalization for Interpretable Machine Learning , 2019, 2019 IEEE Visualization Conference (VIS).
[177] Philip N. Howard,et al. Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum , 2016, ArXiv.
[178] Eric D. Ragan,et al. Open Issues in Combating Fake News: Interpretability as an Opportunity , 2019, ArXiv.
[179] Alexander M. Rush,et al. LSTMVis: A Tool for Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks , 2016, IEEE Transactions on Visualization and Computer Graphics.
[180] Béatrice Cahour,et al. Does projection into use improve trust and exploration? An example with a cruise control system , 2009 .
[181] K. Mueller,et al. Evolutionary Visual Analysis of Deep Neural Networks , 2017 .
[182] Simone Stumpf,et al. Explaining Smart Heating Systems to Discourage Fiddling with Optimized Behavior , 2018, IUI Workshops.
[183] Gautham J. Mysore,et al. An Efficient Posterior Regularized Latent Variable Model for Interactive Sound Source Separation , 2013, ICML.
[184] Gary Klein,et al. Explaining Explanation, Part 3: The Causal Landscape , 2018, IEEE Intelligent Systems.
[185] Lei Shi,et al. A user-based taxonomy for deep learning visualization , 2018, Vis. Informatics.
[186] Qinying Liao,et al. An Uncertainty-Aware Approach for Exploratory Microblog Retrieval , 2015, IEEE Transactions on Visualization and Computer Graphics.
[187] Dan Conway,et al. How to Recommend?: User Trust Factors in Movie Recommender Systems , 2017, IUI.
[188] Alex Endert,et al. The State of the Art in Integrating Machine Learning into Visual Analytics , 2017, Comput. Graph. Forum.
[189] Jun Zhao,et al. 'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions , 2018, CHI.
[190] Avanti Shrikumar,et al. Learning Important Features Through Propagating Activation Differences , 2017, ICML.
[191] Mike Ananny,et al. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability , 2018, New Media Soc.