iNNvestigate-GUI - Explaining Neural Networks Through an Interactive Visualization Tool

In recent years, deep neural networks have reached state-of-the-art performance across many domains, and computer vision in particular has benefited immensely from deep learning. Despite their high performance, deep neural networks often lack interpretability and are mostly regarded as black boxes. The availability of tools that provide insights into these models and help identify potential errors is therefore crucial, and such tools need to integrate seamlessly into the workflow of data scientists and ML researchers. In this paper we propose iNNvestigate-GUI, an open-source graphical toolbox that offers an extensive set of functionalities for comparing the behavior of different networks and explaining their outputs.
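
As a rough illustration of the kind of explanation the toolbox exposes, the minimal sketch below uses the iNNvestigate library (after which the GUI is named) to compute attribution heatmaps for a toy Keras classifier with two analyzers. The model, the random input, and the choice of analyzers are illustrative assumptions, not part of the paper, and exact imports or analyzer names may differ across library versions.

    import numpy as np
    import innvestigate
    from tensorflow import keras

    # Toy convolutional classifier standing in for a real vision model.
    # The final layer omits the softmax, since attribution methods are
    # typically applied to the pre-softmax scores.
    model = keras.Sequential([
        keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
        keras.layers.Flatten(),
        keras.layers.Dense(10),
    ])

    # Placeholder input image (batch of one); a real workflow would load data.
    x = np.random.rand(1, 28, 28, 1).astype("float32")

    # Two attribution methods whose heatmaps a user might compare side by side.
    for method in ["gradient", "lrp.epsilon"]:
        analyzer = innvestigate.create_analyzer(method, model)
        attribution = analyzer.analyze(x)  # heatmap with the same shape as the input
        print(method, attribution.shape, float(attribution.sum()))

A GUI such as the one proposed here would render these per-pixel attributions as overlays on the input image, so that the heatmaps produced by different methods, or by different networks, can be inspected side by side instead of being compared programmatically.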
