[1] Avanti Shrikumar, et al. Learning Important Features Through Propagating Activation Differences, 2017, ICML.
[2] Jonas Mueller, et al. What made you do this? Understanding black-box decisions with sufficient input subsets, 2018, AISTATS.
[3] Le Song, et al. Learning to Explain: An Information-Theoretic Perspective on Model Interpretation, 2018, ICML.
[4] Dumitru Erhan, et al. A Benchmark for Interpretability Methods in Deep Neural Networks, 2018, NeurIPS.
[5] Max Welling, et al. Semi-Supervised Classification with Graph Convolutional Networks, 2016, ICLR.
[6] Lior Wolf, et al. A Formal Approach to Explainability, 2019, AIES.
[7] Ruslan Salakhutdinov, et al. Revisiting Semi-Supervised Learning with Graph Embeddings, 2016, ICML.
[8] My T. Thai, et al. PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks, 2020, NeurIPS.
[9] Megha Khosla, et al. Finding Interpretable Concept Spaces in Node Embeddings using Knowledge Bases, 2019, PKDD/ECML Workshops.
[10] Steven Skiena, et al. DeepWalk: Online Learning of Social Representations, 2014, KDD.
[11] Dumitru Erhan, et al. The (Un)reliability of Saliency Methods, 2017, Explainable AI.
[12] Shuiwang Ji, et al. XGNN: Towards Model-Level Explanations of Graph Neural Networks, 2020, KDD.
[13] Tijl De Bie, et al. ExplaiNE: An Approach for Explaining Network Embedding-based Link Predictions, 2019, ArXiv.
[14] Jan Eric Lenssen, et al. Fast Graph Representation Learning with PyTorch Geometric, 2019, ArXiv.
[15] Thomas Lukasiewicz, et al. The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets, 2020, ArXiv.
[16] Yoav Goldberg, et al. Aligning Faithful Interpretations with their Social Attribution, 2020, ArXiv.
[17] Heiko Hoffmann, et al. Explainability Methods for Graph Convolutional Neural Networks, 2019, CVPR.
[18] C. Sims. Rate–distortion theory and human perception, 2016, Cognition.
[19] Jure Leskovec, et al. GNNExplainer: Generating Explanations for Graph Neural Networks, 2019, NeurIPS.
[20] Yoshua Bengio, et al. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, 2015, ICML.
[21] Graham Neubig, et al. Learning to Deceive with Attention-Based Explanations, 2020, ACL.
[22] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.
[23] M. de Rijke, et al. Do Transformer Attention Heads Provide Transparency in Abstractive Summarization?, 2019, ArXiv.
[24] Jure Leskovec, et al. How Powerful are Graph Neural Networks?, 2018, ICLR.
[25] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, KDD.
[26] Stephan Günnemann, et al. Predict then Propagate: Graph Neural Networks meet Personalized PageRank, 2018, ICLR.
[27] Lucy J. Colwell, et al. Evaluating Attribution for Graph Neural Networks, 2020, NeurIPS.
[28] Alexander Binder, et al. Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers, 2016, ICANN.
[29] Megha Khosla, et al. A Comparative Study for Unsupervised Network Representation Learning, 2019.
[30] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.