Visualizing the Impact of Feature Attribution Baselines