Maarten de Rijke | Fabrizio Silvestri | Ana Lucic | Gabriele Tolomei | Maartje ter Hoeve
[1] Philipp Wintersberger, et al. Operationalizing Human-Centered Perspectives in Explainable AI, 2021, CHI Extended Abstracts.
[2] Fragkiskos D. Malliaros, et al. GraphSVX: Shapley Value Explanations for Graph Neural Networks, 2021, ECML/PKDD.
[3] Baochun Li, et al. Generative Causal Explanations for Graph Neural Networks, 2021, ICML.
[4] Weinan Zhang, et al. MARS: Markov Molecular Sampling for Multi-objective Drug Discovery, 2021, ICLR.
[5] Bogdan Sacaleanu, et al. Generating Interpretable Counterfactual Explanations by Implicit Minimisation of Epistemic and Aleatoric Uncertainties, 2021, AISTATS.
[6] D. Pedreschi, et al. Benchmarking and survey of explanation methods for black box models, 2021, Data Mining and Knowledge Discovery.
[7] Nitesh V. Chawla, et al. Few-Shot Graph Learning for Molecular Property Prediction, 2021, WWW.
[8] Shuiwang Ji, et al. On Explainability of Graph Neural Networks via Subgraph Explorations, 2021, ICML.
[9] Andrew Smart, et al. The Use and Misuse of Counterfactuals in Ethical Machine Learning, 2021, FAccT.
[10] Shuiwang Ji, et al. Explainability in Graph Neural Networks: A Taxonomic Survey, 2020, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[11] Takuya Takagi, et al. Ordered Counterfactual Explanation by Mixed-Integer Linear Optimization, 2020, AAAI.
[12] Thierry Langer, et al. A compact review of molecular property prediction with graph neural networks, 2020, Drug Discovery Today: Technologies.
[13] Bo Zong, et al. Parameterized Explainer for Graph Neural Network, 2020, NeurIPS.
[14] Roger Wattenhofer, et al. Contrastive Graph Neural Network Explanation, 2020, ArXiv.
[15] John P. Dickerson, et al. Counterfactual Explanations for Machine Learning: A Review, 2020, ArXiv.
[16] My T. Thai, et al. PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks, 2020, NeurIPS.
[17] Bernhard Schölkopf, et al. A survey of algorithmic recourse: definitions, formulations, solutions, and prospects, 2020, ArXiv.
[18] Nicola De Cao, et al. Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking, 2020, ICLR.
[19] Patrick Pantel, et al. Preserving integrity in online social networks, 2020, Communications of the ACM.
[20] Timo Freiesleben, et al. The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples, 2020, Minds and Machines.
[21] K. Branson, et al. Meta-Learning GNN Initializations for Low-Resource Molecular Property Prediction, 2020.
[22] Shinichi Nakajima, et al. XAI for Graphs: Explaining Graph Neural Network Predictions by Identifying Relevant Walks, 2020, ArXiv.
[23] Shuiwang Ji, et al. XGNN: Towards Model-Level Explanations of Graph Neural Networks, 2020, KDD.
[24] Christopher Ré, et al. Machine Learning on Graphs: A Model and Comprehensive Taxonomy, 2020, Journal of Machine Learning Research.
[25] Emma J. Chory, et al. A Deep Learning Approach to Antibiotic Discovery, 2020, Cell.
[26] Qiang Huang, et al. GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks, 2020, IEEE Transactions on Knowledge and Data Engineering.
[27] Solon Barocas, et al. The hidden assumptions behind counterfactual explanations and principal reasons, 2019, FAT*.
[28] M. de Rijke, et al. FOCUS: Flexible Optimizable Counterfactual Explanations for Tree Ensembles, 2019, AAAI.
[29] Brandon M. Greenwell, et al. Interpretable Machine Learning, 2019, Hands-On Machine Learning with R.
[30] M. de Rijke, et al. Why does my model fail? Contrastive local explanations for retail forecasting, 2019, FAT*.
[31] Heiko Hoffmann, et al. Explainability Methods for Graph Convolutional Neural Networks, 2019, CVPR.
[32] Amir-Hossein Karimi, et al. Model-Agnostic Counterfactual Explanations for Consequential Decisions, 2019, AISTATS.
[33] Pietro Liò, et al. Drug-Drug Adverse Effect Prediction with Graph Co-Attention, 2019, ArXiv.
[34] Hossein Azizpour, et al. Explainability Techniques for Graph Convolutional Networks, 2019, ICML.
[35] Tijl De Bie, et al. ExplaiNE: An Approach for Explaining Network Embedding-based Link Predictions, 2019, ArXiv.
[36] J. Leskovec, et al. GNNExplainer: Generating Explanations for Graph Neural Networks, 2019, NeurIPS.
[37] Philip S. Yu, et al. Adversarial Attack and Defense on Graph Data: A Survey, 2018, IEEE Transactions on Knowledge and Data Engineering.
[38] Yang Liu, et al. Actionable Recourse in Linear Classification, 2018, FAT*.
[39] Freddy Lécué, et al. Explainable AI: The New 42?, 2018, CD-MAKE.
[40] Razvan Pascanu, et al. Relational inductive biases, deep learning, and graph networks, 2018, ArXiv.
[41] Franco Turini, et al. Local Rule-Based Explanations of Black Box Decision Systems, 2018, ArXiv.
[42] Franco Turini, et al. A Survey of Methods for Explaining Black Box Models, 2018, ACM Computing Surveys.
[43] Jure Leskovec, et al. Modeling polypharmacy side effects with graph convolutional networks, 2018, bioRxiv.
[44] Marie-Jeanne Lesot, et al. Inverse Classification for Comparison-based Interpretability in Machine Learning, 2017, ArXiv.
[45] Chris Russell, et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 2017, ArXiv.
[46] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artificial Intelligence.
[47] Fabrizio Silvestri, et al. Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking, 2017, KDD.
[48] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[49] Been Kim, et al. Towards a Rigorous Science of Interpretable Machine Learning, 2017, ArXiv.
[50] R. Venkatesh Babu, et al. Training Sparse Neural Networks, 2016, CVPR Workshops.
[51] Max Welling, et al. Semi-Supervised Classification with Graph Convolutional Networks, 2016, ICLR.
[52] Marco Tulio Ribeiro, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, HLT-NAACL Demos.
[53] Masashi Sugiyama, et al. High-Dimensional Feature Selection by Feature-Wise Kernelized Lasso, 2012, Neural Computation.
[54] A. Debnath, et al. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds: Correlation with molecular orbital energies and hydrophobicity, 1991, Journal of Medicinal Chemistry.
[55] Ilia Stepin, et al. A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence, 2021, IEEE Access.
[56] Philip S. Yu, et al. A Comprehensive Survey on Graph Neural Networks, 2019, IEEE Transactions on Neural Networks and Learning Systems.
[57] Wojciech Samek, et al. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 2019, Springer.
[58] Been Kim, et al. Considerations for Evaluation and Generalization in Interpretable Machine Learning, 2018.