[1] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[2] Been Kim, et al. Concept Bottleneck Models, 2020, ICML.
[3] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Bolei Zhou, et al. Interpretable Basis Decomposition for Visual Explanation, 2018, ECCV.
[5] Christoph H. Lampert, et al. Zero-Shot Learning—A Comprehensive Evaluation of the Good, the Bad and the Ugly, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[6] Thomas Brox, et al. Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks, 2014, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[7] Finale Doshi-Velez, et al. Mind the Gap: A Generative Approach to Interpretable Feature Selection and Extraction, 2015, NIPS.
[8] Avanti Shrikumar, et al. Learning Important Features Through Propagating Activation Differences, 2017, ICML.
[9] Bolei Zhou, et al. Network Dissection: Quantifying Interpretability of Deep Visual Representations, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[10] Marie-Jeanne Lesot, et al. Issues with post-hoc counterfactual explanations: a discussion, 2019, arXiv.
[11] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[12] R. Tibshirani. Regression Shrinkage and Selection via the Lasso, 1996.
[13] Vineeth N. Balasubramanian, et al. Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks, 2017, 2018 IEEE Winter Conference on Applications of Computer Vision (WACV).
[14] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[15] Tommi S. Jaakkola, et al. Towards Robust Interpretability with Self-Explaining Neural Networks, 2018, NeurIPS.
[16] Ramprasaath R. Selvaraju, et al. Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization, 2016.
[17] Bolei Zhou, et al. Learning Deep Features for Discriminative Localization, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[18] Brandon M. Greenwell, et al. Interpretable Machine Learning, 2019, Hands-On Machine Learning with R.
[19] Martin Wattenberg, et al. SmoothGrad: removing noise by adding noise, 2017, arXiv.
[20] Nikos Komodakis, et al. Unsupervised Representation Learning by Predicting Image Rotations, 2018, ICLR.
[21] Zhe L. Lin, et al. Top-Down Neural Attention by Excitation Backprop, 2016, International Journal of Computer Vision.
[22] Marie-Jeanne Lesot, et al. The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations, 2019, IJCAI.
[23] James Zou, et al. Towards Automatic Concept-based Explanations, 2019, NeurIPS.
[24] Alexei A. Efros, et al. Colorful Image Colorization, 2016, ECCV.
[25] Martin Wattenberg, et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV), 2017, ICML.
[26] Sergio Escalera, et al. Explainable and Interpretable Models in Computer Vision and Machine Learning, 2018, The Springer Series on Challenges in Machine Learning.
[27] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[28] Cynthia Rudin, et al. Supersparse Linear Integer Models for Interpretable Classification, 2013, arXiv:1306.6677.
[29] Yingli Tian, et al. Recognizing American Sign Language Manual Signs from RGB-D Videos, 2019, SSRN Electronic Journal.