Sebastian Houben | Pascal Welke | Matthias Jakobs | Laura von Rueden | Katharina Beckh | Sebastian Müller | Vanessa Toborek | Hanxiao Tan | Raphael Fischer
[1] Stephen Muggleton, et al. Ultra-Strong Machine Learning: comprehensibility of programs learned with ILP, 2018, Machine Learning.
[2] K. Kersting, et al. Making deep neural networks right for the right scientific reasons by interacting with their explanations, 2020, Nature Machine Intelligence.
[3] Nagiza F. Samatova, et al. Theory-Guided Data Science: A New Paradigm for Scientific Discovery from Data, 2016, IEEE Transactions on Knowledge and Data Engineering.
[4] Martin Wattenberg, et al. The What-If Tool: Interactive Probing of Machine Learning Models, 2019, IEEE Transactions on Visualization and Computer Graphics.
[5] Luc De Raedt, et al. Inductive Logic Programming: Theory and Methods, 1994, J. Log. Program.
[6] Gérard Bloch, et al. Incorporating prior knowledge in support vector machines for classification: A review, 2008, Neurocomputing.
[7] Weng-Keen Wong, et al. Principles of Explanatory Debugging to Personalize Interactive Machine Learning, 2015, IUI.
[8] Amina Adadi, et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018, IEEE Access.
[9] Edgar R. Weippl, et al. Witnesses for the Doctor in the Loop, 2015, BIH.
[10] Deborah L. McGuinness, et al. Directions for Explainable Knowledge-Enabled Systems, 2020, Knowledge Graphs for eXplainable Artificial Intelligence.
[11] Jure Leskovec, et al. Faithful and Customizable Explanations of Black Box Models, 2019, AIES.
[12] Amit Sharma, et al. Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers, 2019, ArXiv.
[13] Paris Perdikaris, et al. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, 2019, J. Comput. Phys.
[14] Marco F. Huber, et al. A Survey on the Explainability of Supervised Machine Learning, 2020, J. Artif. Intell. Res.
[15] L. Shapley, et al. Values of Non-Atomic Games, 1974.
[16] Martin Wattenberg, et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV), 2017, ICML.
[17] Alexander Binder, et al. Unmasking Clever Hans predictors and assessing what machines really learn, 2019, Nature Communications.
[18] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.
[19] Kenney Ng, et al. Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models, 2016, CHI.
[20] Mohit Bansal, et al. Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?, 2020, ACL.
[21] Andrew Slavin Ross, et al. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations, 2017, IJCAI.
[22] Ana Marasović, et al. Teach Me to Explain: A Review of Datasets for Explainable NLP, 2021, ArXiv.
[23] Alexander Binder, et al. Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers, 2016, ICANN.
[24] Derek Doran, et al. What Does Explainable AI Really Mean? A New Conceptualization of Perspectives, 2017, CEx@AI*IA.
[25] Rosane Minghim, et al. An Approach to Supporting Incremental Visual Data Classification, 2015, IEEE Transactions on Visualization and Computer Graphics.
[26] Peter A. Flach, et al. Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant, 2018, IJCAI.
[27] Amit Sharma, et al. Explaining machine learning classifiers through diverse counterfactual explanations, 2020, FAT*.
[28] Xun Xue, et al. A Survey of Data-Driven and Knowledge-Aware eXplainable AI, 2020, IEEE Transactions on Knowledge and Data Engineering.
[29] Kristian Kersting, et al. Explanatory Interactive Machine Learning, 2019, AIES.
[30] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[31] Frederick Liu, et al. Incorporating Priors with Feature Attribution on Text Classification, 2019, ACL.
[32] Peter Henderson, et al. Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, 2020, ArXiv.
[33] Andrea Vedaldi, et al. Visualizing Deep Convolutional Neural Networks Using Natural Pre-images, 2015, International Journal of Computer Vision.
[34] Pascal Sturmfels, et al. Improving performance of deep learning models with axiomatic attribution priors and expected gradients, 2020, Nature Machine Intelligence.
[35] Pedro Saleiro, et al. Teaching the Machine to Explain Itself using Domain Knowledge, 2020, ArXiv.
[36] Johannes Schneider, et al. Personalized Explanation for Machine Learning: a Conceptualization, 2019, ECIS.
[37] Le Song, et al. GRAM: Graph-based Attention Model for Healthcare Representation Learning, 2016, KDD.
[38] Bernd Bischl, et al. Multi-Objective Counterfactual Explanations, 2020, PPSN.
[39] Xiting Wang, et al. Towards better analysis of machine learning models: A visual analytics perspective, 2017, Vis. Informatics.
[40] Geoffrey E. Hinton, et al. Distilling a Neural Network Into a Soft Decision Tree, 2017, CEx@AI*IA.
[41] Fabian J. Theis, et al. Learning interpretable latent autoencoder representations with annotations of feature sets, 2020, bioRxiv.
[42] Thomas Lukasiewicz, et al. e-SNLI: Natural Language Inference with Natural Language Explanations, 2018, NeurIPS.
[43] Tarek R. Besold, et al. A historical perspective of explainable Artificial Intelligence, 2020, WIREs Data Mining Knowl. Discov.
[44] Ilia Stepin, et al. A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence, 2021, IEEE Access.
[45] Aidong Zhang, et al. Incorporating Biological Knowledge with Factor Graph Neural Network for Interpretable Deep Learning, 2019, ArXiv.
[46] Yiqun Liu, et al. Jointly Learning Explainable Rules for Recommendation with Knowledge Graph, 2019, WWW.
[47] Björn Schuller, et al. eXplainable Cooperative Machine Learning with NOVA, 2020, KI - Künstliche Intelligenz.
[48] Bernd Bischl, et al. Pitfalls to Avoid when Interpreting Machine Learning Models, 2020, ArXiv.
[49] Krzysztof Z. Gajos, et al. Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems, 2020, IUI.
[50] Yixin Cao, et al. Explainable Reasoning over Knowledge Graphs for Recommendation, 2018, AAAI.
[51] C. Rudin, et al. Concept whitening for interpretable image recognition, 2020, Nature Machine Intelligence.
[52] L. Shapley. A Value for n-person Games, 1988.
[53] Zheng-Yu Niu, et al. Knowledge Aware Conversation Generation with Explainable Reasoning over Augmented Graphs, 2019, EMNLP.
[54] Peter A. Flach, et al. One Explanation Does Not Fit All, 2020, KI - Künstliche Intelligenz.
[55] Klaus-Robert Müller, et al. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models, 2017, ArXiv.
[56] Cynthia Rudin, et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, 2018, Nature Machine Intelligence.
[57] Francisco Herrera, et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI, 2020, Inf. Fusion.
[58] Varun Chandola, et al. Tree-based Regularization for Interpretable Readmission Prediction, 2019, AAAI Spring Symposium: Combining Machine Learning with Knowledge Engineering.
[59] Albert Gordo, et al. Learning Global Additive Explanations for Neural Nets Using Model Distillation, 2018.
[60] Ronald M. Summers, et al. Holistic and Comprehensive Annotation of Clinically Significant Findings on Diverse CT Images: Learning From Radiology Reports and Label Ontology, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[61] Fenglong Ma, et al. KAME: Knowledge-based Attention Model for Diagnosis Prediction in Healthcare, 2018, CIKM.
[62] Peter A. Flach, et al. Explainability fact sheets: a framework for systematic assessment of explainable approaches, 2019, FAT*.
[63] Ribana Roscher, et al. Explainable Machine Learning for Scientific Insights and Discoveries, 2019, IEEE Access.
[64] Quanshi Zhang, et al. Interactively Transferring CNN Patterns for Part Localization, 2017, ArXiv.
[65] Thorsten Joachims, et al. Coactive Learning, 2015, J. Artif. Intell. Res.
[66] Hongxia Jin, et al. Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[67] Kay R. Amel. From Shallow to Deep Interactions Between Knowledge Representation, Reasoning and Machine Learning, 2019.
[68] Viktor K. Prasanna, et al. Understanding web images by object relation network, 2012, WWW.
[69] Birgit Kirsch, et al. Informed Machine Learning – A Taxonomy and Survey of Integrating Knowledge into Learning Systems, 2019.
[70] Johannes Jäschke, et al. Combining machine learning and process engineering physics towards enhanced accuracy and explainability of data-driven models, 2020, Comput. Chem. Eng.
[71] Michael Siebers, et al. Enriching Visual with Verbal Explanations for Relational Concepts - Combining LIME with Aleph, 2019, PKDD/ECML Workshops.
[72] Chris Russell, et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 2017, ArXiv.
[73] P. Liò, et al. REM: An Integrative Rule Extraction Methodology for Explainable Data Analysis in Healthcare, 2021, bioRxiv.
[74] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[75] Chandan Singh, et al. Interpretations are useful: penalizing explanations to align neural networks with prior knowledge, 2019, ICML.
[76] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[77] Maya Krishnan, et al. Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning, 2019, Philosophy & Technology.
[78] Hod Lipson, et al. Understanding Neural Networks Through Deep Visualization, 2015, ArXiv.
[79] Kristian Kersting, et al. Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations, 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[80] Marcel van Gerven, et al. Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges, 2018, ArXiv.
[81] Bolei Zhou, et al. Understanding Intra-Class Knowledge Inside CNN, 2015, ArXiv.