Chirag Agarwal | Marinka Zitnik | Himabindu Lakkaraju
[1] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[2] Toniann Pitassi, et al. Fairness through awareness, 2011, ITCS '12.
[3] Bernhard Pfahringer, et al. Regularisation of neural networks by enforcing Lipschitz continuity, 2018, Machine Learning.
[4] Shuiwang Ji, et al. Explainability in Graph Neural Networks: A Taxonomic Survey, 2020, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[5] Yang Liu, et al. Actionable Recourse in Linear Classification, 2018, FAT.
[6] M. Yamada, et al. GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks, 2020, IEEE Transactions on Knowledge and Data Engineering.
[7] Himabindu Lakkaraju, et al. How Much Should I Trust You? Modeling Uncertainty of Black Box Explanations, 2020, ArXiv.
[8] Martin Wattenberg, et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV), 2017, ICML.
[9] Michael Sejr Schlichtkrull, et al. Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking, 2020, ArXiv.
[10] Marinka Zitnik, et al. Representation Learning for Networks in Biology and Medicine: Advancements, Challenges, and Opportunities, 2021, ArXiv.
[11] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[12] Roger Wattenhofer, et al. Contrastive Graph Neural Network Explanation, 2020, ArXiv.
[13] J. Leskovec, et al. Open Graph Benchmark: Datasets for Machine Learning on Graphs, 2020, NeurIPS.
[14] Janis Klaise, et al. Interpretable Counterfactual Explanations Guided by Prototypes, 2019, ECML/PKDD.
[15] Peter A. Flach, et al. FACE: Feasible and Actionable Counterfactual Explanations, 2020, AIES.
[16] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, KDD.
[17] Somesh Jha, et al. Concise Explanations of Neural Networks using Adversarial Training, 2018, ICML.
[18] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[19] Hossein Azizpour, et al. Explainability Techniques for Graph Convolutional Networks, 2019, ICML 2019.
[20] Alexander Levine, et al. Certifiably Robust Interpretation in Deep Learning, 2019, ArXiv.
[21] Jie Chen, et al. EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs, 2020, AAAI.
[22] Jure Leskovec, et al. GNNExplainer: Generating Explanations for Graph Neural Networks, 2019, NeurIPS.
[23] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[24] Chris Russell, et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 2017, ArXiv.
[25] Robert Reams, et al. Hadamard inverses, square roots and products of almost semidefinite matrices, 1999.
[26] Jure Leskovec, et al. Inductive Representation Learning on Large Graphs, 2017, NIPS.
[27] Nathan Srebro, et al. Equality of Opportunity in Supervised Learning, 2016, NIPS.
[28] Anh Nguyen, et al. Explaining Image Classifiers by Removing Input Features Using Generative Models, 2020, ACCV.
[29] Bernhard Schölkopf, et al. Algorithmic recourse under imperfect causal knowledge: a probabilistic approach, 2020, NeurIPS.
[30] M. de Rijke, et al. CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks, 2021, International Conference on Artificial Intelligence and Statistics.
[31] Solon Barocas, et al. The hidden assumptions behind counterfactual explanations and principal reasons, 2019, FAT*.
[32] Lucy J. Colwell, et al. Evaluating Attribution for Graph Neural Networks, 2020, NeurIPS.
[33] Bernhard Schölkopf, et al. Algorithmic Recourse: from Counterfactual Explanations to Interventions, 2020, FAccT.
[34] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[35] Osbert Bastani, et al. Interpretability via Model Extraction, 2017, ArXiv.
[36] Amir-Hossein Karimi, et al. Model-Agnostic Counterfactual Explanations for Consequential Decisions, 2019, AISTATS.
[37] Marinka Zitnik, et al. Towards a Unified Framework for Fair and Stable Graph Representation Learning, 2021, UAI.
[38] Guangyin Jin, et al. Addressing Crime Situation Forecasting Task with Temporal Graph Convolutional Neural Network Approach, 2020, 2020 12th International Conference on Measuring Technology and Mechatronics Automation (ICMTMA).
[39] Martin Wattenberg, et al. SmoothGrad: removing noise by adding noise, 2017, ArXiv.
[40] Jure Leskovec, et al. Modeling polypharmacy side effects with graph convolutional networks, 2018, bioRxiv.
[41] Geoff Gordon, et al. Inherent Tradeoffs in Learning Fair Representations, 2019, NeurIPS.
[42] Jure Leskovec, et al. Faithful and Customizable Explanations of Black Box Models, 2019, AIES.
[43] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.
[44] Samuel S. Schoenholz, et al. Neural Message Passing for Quantum Chemistry, 2017, ICML.
[45] Masashi Sugiyama, et al. High-Dimensional Feature Selection by Feature-Wise Kernelized Lasso, 2012, Neural Computation.
[46] A.-L. Barabási, et al. Network medicine framework for identifying drug-repurposing opportunities for COVID-19, 2020, Proceedings of the National Academy of Sciences.
[47] Ulrike von Luxburg, et al. Looking deeper into LIME, 2020, ArXiv.
[48] My T. Thai, et al. PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks, 2020, NeurIPS.
[49] Heiko Hoffmann, et al. Explainability Methods for Graph Convolutional Neural Networks, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[50] Bernhard Schölkopf, et al. Measuring Statistical Dependence with Hilbert-Schmidt Norms, 2005, ALT.
[51] Bo Zong, et al. Parameterized Explainer for Graph Neural Network, 2020, NeurIPS.