Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Fred Hohman | Haekyu Park | Caleb Robinson | Duen Horng Chau
[1] Bolei Zhou,et al. Network Dissection: Quantifying Interpretability of Deep Visual Representations , 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Pascal Vincent,et al. Visualizing Higher-Layer Features of a Deep Network , 2009 .
[3] Arvind Satyanarayan,et al. The Building Blocks of Interpretability , 2018 .
[4] Minsuk Kahng,et al. Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers , 2018, IEEE Transactions on Visualization and Computer Graphics.
[5] Rob Fergus,et al. Visualizing and Understanding Convolutional Networks , 2013, ECCV.
[6] Frank van Ham,et al. “Search, Show Context, Expand on Demand”: Supporting Large Graph Exploration with Degree-of-Interest , 2009, IEEE Transactions on Visualization and Computer Graphics.
[7] Kwan-Liu Ma,et al. Visual Recommendations for Network Navigation , 2011, Comput. Graph. Forum.
[8] Amy Nicole Langville,et al. A Survey of Eigenvector Methods for Web Information Retrieval , 2005, SIAM Review.
[9] Xiaoming Liu,et al. Do Convolutional Neural Networks Learn Class Hierarchy? , 2017, IEEE Transactions on Visualization and Computer Graphics.
[10] Ross Maciejewski,et al. The State‐of‐the‐Art in Predictive Visual Analytics , 2017, Comput. Graph. Forum.
[11] Or Biran,et al. Explanation and Justification in Machine Learning: A Survey , 2017 .
[12] Abhishek Das,et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization , 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[13] Andrew Zisserman,et al. Very Deep Convolutional Networks for Large-Scale Image Recognition , 2014, ICLR.
[14] Alexander Mordvintsev,et al. Inceptionism: Going Deeper into Neural Networks , 2015 .
[15] Klaus-Robert Müller,et al. Learning how to explain neural networks: PatternNet and PatternAttribution , 2017, ICLR.
[17] Bongshin Lee,et al. Squares: Supporting Interactive Performance Analysis for Multiclass Classifiers , 2017, IEEE Transactions on Visualization and Computer Graphics.
[18] Ankur Taly,et al. Axiomatic Attribution for Deep Networks , 2017, ICML.
[19] Dumitru Erhan,et al. Going deeper with convolutions , 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[20] Martin Wattenberg,et al. SmoothGrad: removing noise by adding noise , 2017, ArXiv.
[21] Sang Michael Xie,et al. Combining satellite imagery and machine learning to predict poverty , 2016, Science.
[22] Martin Wattenberg,et al. GAN Lab: Understanding Complex Deep Generative Models using Interactive Visual Experimentation , 2018, IEEE Transactions on Visualization and Computer Graphics.
[23] David Maxwell Chickering,et al. ModelTracker: Redesigning Performance Analysis Tools for Machine Learning , 2015, CHI.
[24] Leonidas J. Guibas,et al. Taskonomy: Disentangling Task Transfer Learning , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[25] Deborah Silver,et al. Feature Visualization , 1994, Scientific Visualization.
[26] Ramprasaath R. Selvaraju,et al. Choose Your Neuron: Incorporating Domain Knowledge through Neuron-Importance , 2018, ECCV.
[27] Mohan S. Kankanhalli,et al. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda , 2018, CHI.
[28] Zachary Chase Lipton. The mythos of model interpretability , 2016, ACM Queue.
[29] Klaus-Robert Müller,et al. PatternNet and PatternLRP - Improving the interpretability of neural networks , 2017, ArXiv.
[30] George E. Dahl,et al. Artificial Intelligence-Based Breast Cancer Nodal Metastasis Detection: Insights Into the Black Box for Pathologists. , 2018, Archives of pathology & laboratory medicine.
[31] Alexander Binder,et al. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation , 2015, PloS one.
[32] Martin Wattenberg,et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) , 2017, ICML.
[33] Yoshua Bengio,et al. Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[34] Lalana Kagal,et al. Explaining Explanations: An Overview of Interpretability of Machine Learning , 2018, 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA).
[35] Wojciech Samek,et al. Methods for interpreting and understanding deep neural networks , 2017, Digit. Signal Process.
[36] Rajeev Motwani,et al. The PageRank Citation Ranking: Bringing Order to the Web , 1999, WWW 1999.
[37] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[38] Li Chen,et al. SHIELD: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression , 2018, KDD.
[39] Andrea Vedaldi,et al. Net2Vec: Quantifying and Explaining How Concepts are Encoded by Filters in Deep Neural Networks , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[40] Martin Wattenberg,et al. Direct-Manipulation Visualization of Deep Networks , 2017, ArXiv.
[41] Andrea Vedaldi,et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[42] Martin Wattenberg,et al. Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow , 2018, IEEE Transactions on Visualization and Computer Graphics.
[43] Qiang Yang,et al. A Survey on Transfer Learning , 2010, IEEE Transactions on Knowledge and Data Engineering.
[44] David F. Steiner,et al. Impact of Deep Learning Assistance on the Histopathologic Review of Lymph Nodes for Metastatic Breast Cancer , 2018, The American journal of surgical pathology.
[45] G. W. Furnas,et al. Generalized fisheye views , 1986, CHI '86.
[46] Leland McInnes,et al. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction , 2018, ArXiv.
[47] Been Kim,et al. Towards A Rigorous Science of Interpretable Machine Learning , 2017, ArXiv.
[48] Adam W. Harley. An Interactive Node-Link Visualization of Convolutional Neural Networks , 2015, ISVC.
[49] Jeffrey Heer,et al. Refinery: Visual Exploration of Large, Heterogeneous Networks through Associative Browsing , 2015, Comput. Graph. Forum.
[50] Minsuk Kahng,et al. ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models , 2017, IEEE Transactions on Visualization and Computer Graphics.
[51] Steven M. Drucker,et al. Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models , 2019, CHI.
[52] Yoshua Bengio,et al. Generative Adversarial Nets , 2014, NIPS.
[53] Leon A. Gatys,et al. Image Style Transfer Using Convolutional Neural Networks , 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[54] Andrew Zisserman,et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps , 2013, ICLR.
[55] Zhen Li,et al. Towards Better Analysis of Deep Convolutional Neural Networks , 2016, IEEE Transactions on Visualization and Computer Graphics.
[56] Carlos Guestrin,et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier , 2016, ArXiv.
[57] Lalana Kagal,et al. Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning , 2018, ArXiv.
[58] Jun Zhu,et al. Analyzing the Noise Robustness of Deep Neural Networks , 2018, 2018 IEEE Conference on Visual Analytics Science and Technology (VAST).
[59] Michael S. Bernstein,et al. ImageNet Large Scale Visual Recognition Challenge , 2014, International Journal of Computer Vision.