Conceptual Explanations of Neural Network Prediction for Time Series
Andreas Dengel | Ferdinand Küsters | Sheraz Ahmed | Peter Schichtel
[1] Andrea Vedaldi et al. "Interpretable Explanations of Black Boxes by Meaningful Perturbation." ICCV, 2017.
[2] Andreas Dengel et al. "TSXplain: Demystification of DNN Decisions for Time-Series using Natural Language and Statistical Features." ICANN, 2019.
[3] Andreas Dengel et al. "What do Deep Networks Like to See?" CVPR, 2018.
[4] Martin Wattenberg et al. "SmoothGrad: Removing Noise by Adding Noise." arXiv, 2017.
[5] Franco Turini et al. "A Survey of Methods for Explaining Black Box Models." ACM Computing Surveys, 2018.
[6] Andreas Dengel et al. "TSViz: Demystification of Deep Learning Models for Time-Series Analysis." IEEE Access, 2018.
[7] Thomas Brox et al. "Striving for Simplicity: The All Convolutional Net." ICLR, 2014.
[8] Scott Lundberg et al. "A Unified Approach to Interpreting Model Predictions." NIPS, 2017.
[9] Percy Liang et al. "Understanding Black-box Predictions via Influence Functions." ICML, 2017.
[10] Ankur Taly et al. "Axiomatic Attribution for Deep Networks." ICML, 2017.
[11] Alexander Binder et al. "Layer-Wise Relevance Propagation for Deep Neural Network Architectures." 2016.
[12] Yash Goyal et al. "Explaining Classifiers with Causal Concept Effect (CaCE)." arXiv, 2019.
[13] Martin Wattenberg et al. "Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)." ICML, 2017.
[14] Joachim Diederich et al. "Survey and critique of techniques for extracting rules from trained artificial neural networks." Knowledge-Based Systems, 1995.
[15] Carlos Guestrin et al. "'Why Should I Trust You?': Explaining the Predictions of Any Classifier." arXiv, 2016.
[16] Klaus-Robert Müller et al. "iNNvestigate neural networks!" Journal of Machine Learning Research, 2018.