Frederick Klauschen | Marius Kloft | Kirill Bykov | Marina M.-C. Höhne | Adelaida Creosteanu | Klaus-Robert Müller | Shinichi Nakajima
[1] Motoaki Kawanabe, et al. How to Explain Individual Classification Decisions, 2009, J. Mach. Learn. Res.
[2] Klaus-Robert Müller, et al. Understanding Patch-Based Learning of Video Data by Explaining Predictions, 2019, Explainable AI.
[3] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[4] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[5] Klaus-Robert Müller, et al. Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications, 2021, Proceedings of the IEEE.
[6] Kevin Gimpel, et al. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks, 2016, ICLR.
[7] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[8] Alexander Binder, et al. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation, 2015, PLoS ONE.
[9] Wojciech Samek, et al. Methods for interpreting and understanding deep neural networks, 2017, Digit. Signal Process.
[10] James Babcock, et al. Artificial General Intelligence, 2016, Lecture Notes in Computer Science.
[11] Klaus-Robert Müller, et al. An adaptive deep reinforcement learning framework enables curling robots with human-like performance in real-world conditions, 2020, Science Robotics.
[12] Klaus-Robert Müller, et al. Scoring of tumor-infiltrating lymphocytes: From visual estimation to machine learning, 2018, Seminars in Cancer Biology.
[13] Georg Langs, et al. Causability and explainability of artificial intelligence in medicine, 2019, WIREs Data Mining Knowl. Discov.
[14] Quoc V. Le, et al. Self-Training With Noisy Student Improves ImageNet Classification, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[15] M. Maruthappu, et al. Artificial intelligence in medicine: current trends and future possibilities, 2018, The British Journal of General Practice.
[16] Fabio A. González, et al. Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks, 2014, Medical Imaging.
[17] Zhe L. Lin, et al. Top-Down Neural Attention by Excitation Backprop, 2016, International Journal of Computer Vision.
[18] Ulrike von Luxburg. A tutorial on spectral clustering, 2007, Stat. Comput.
[19] Klaus-Robert Müller, et al. ML2Motif—Reliable extraction of discriminative sequence motifs from learning machines, 2017, PLoS ONE.
[20] Jason Yosinski, et al. Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks, 2016, ArXiv.
[21] Michael Arens, et al. Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey, 2019, Mach. Learn. Knowl. Extr.
[22] Klaus-Robert Müller, et al. Layer-Wise Relevance Propagation: An Overview, 2019, Explainable AI.
[23] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[24] Thomas Brox, et al. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks, 2016, NIPS.
[25] Rudolph Triebel, et al. Bayesian Optimization Meets Laplace Approximation for Robotic Introspection, 2020, ArXiv.
[26] Yiming Yang, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding, 2019, NeurIPS.
[27] Diederik P. Kingma, et al. Variational Dropout and the Local Reparameterization Trick, 2015, NIPS.
[28] Gunnar Rätsch, et al. Opening the Black Box: Revealing Interpretable Sequence Motifs in Kernel-Based Learning Algorithms, 2015, ECML/PKDD.
[29] Wendy Ju, et al. Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance, 2014, International Journal on Interactive Design and Manufacturing (IJIDeM).
[30] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE, 2008, J. Mach. Learn. Res.
[31] Pascal Vincent, et al. Visualizing Higher-Layer Features of a Deep Network, 2009.
[32] Klaus-Robert Müller, et al. Towards Robust Explanations for Deep Neural Networks, 2020, Pattern Recognit.
[33] Alex Graves. Practical Variational Inference for Neural Networks, 2011, NIPS.
[34] Shinichi Nakajima, et al. Towards Best Practice in Explaining Neural Network Decisions with LRP, 2019, 2020 International Joint Conference on Neural Networks (IJCNN).
[35] Alexander Binder, et al. Analyzing Classifiers: Fisher Vectors and Deep Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[36] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[37] Klaus-Robert Müller, et al. Resolving challenges in deep learning-based analyses of histopathological images using explanation methods, 2019, Scientific Reports.
[38] Jian Sun, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, IEEE International Conference on Computer Vision (ICCV).
[39] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[40] Christopher M. Bishop. Pattern Recognition and Machine Learning, 2006, Springer.
[41] Klaus-Robert Müller, et al. "What is relevant in a text document?": An interpretable machine learning approach, 2016, PLoS ONE.
[42] Wojciech Samek, et al. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 2019, Explainable AI.
[43] Klaus-Robert Müller, et al. Explanations can be manipulated and geometry is to blame, 2019, NeurIPS.
[44] Arun Das, et al. Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey, 2020, ArXiv.
[45] David Barber, et al. A Scalable Laplace Approximation for Neural Networks, 2018, ICLR.
[46] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[47] Hugh Chen, et al. From local explanations to global understanding with explainable AI for trees, 2020, Nature Machine Intelligence.
[48] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[49] Heinrich Hußmann, et al. I Drive - You Trust: Explaining Driving Behavior Of Autonomous Cars, 2019, CHI Extended Abstracts.
[50] Amina Adadi, et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018, IEEE Access.
[51] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[52] Tom Fawcett. An introduction to ROC analysis, 2006, Pattern Recognit. Lett.
[53] Rudolph Triebel, et al. Estimating Model Uncertainty of Neural Networks in Sparse Information Form, 2020, ICML.
[54] Andrew Janowczyk, et al. Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases, 2016, Journal of Pathology Informatics.
[55] Hod Lipson, et al. Understanding Neural Networks Through Deep Visualization, 2015, ArXiv.
[56] Demis Hassabis, et al. Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, 2017, ArXiv.
[57] Sebastian Nowozin, et al. How Good is the Bayes Posterior in Deep Neural Networks Really?, 2020, ICML.
[58] Martin Wattenberg, et al. SmoothGrad: removing noise by adding noise, 2017, ArXiv.
[59] Taesup Moon, et al. Fooling Neural Network Interpretations via Adversarial Model Manipulation, 2019, NeurIPS.
[60] Mohammad Shoeybi, et al. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism, 2019, ArXiv.
[61] Andrew Gordon Wilson. The Case for Bayesian Deep Learning, 2020, ArXiv.
[62] Klaus-Robert Müller, et al. Feature Importance Measure for Non-linear Learning Algorithms, 2016, ArXiv.
[63] Dmitry P. Vetrov, et al. Variational Dropout Sparsifies Deep Neural Networks, 2017, ICML.
[64] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[65] Mohammad Emtiyaz Khan, et al. Practical Deep Learning with Bayesian Principles, 2019, NeurIPS.
[66] Myunghee Cho Paik, et al. Uncertainty quantification using Bayesian neural networks in classification: Application to biomedical image segmentation, 2020, Comput. Stat. Data Anal.
[67] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[68] Masaru Ishii, et al. Morphological and molecular breast cancer profiling through explainable machine learning, 2021, Nature Machine Intelligence.
[69] Kirill Bykov, et al. NoiseGrad: enhancing explanations by introducing stochasticity to model weights, 2021, ArXiv.
[70] Matthijs Douze, et al. Fixing the train-test resolution discrepancy: FixEfficientNet, 2020, ArXiv.
[71] Kenneth O. Stanley, et al. Go-Explore: a New Approach for Hard-Exploration Problems, 2019, ArXiv.
[72] Alexander Binder, et al. Unmasking Clever Hans predictors and assessing what machines really learn, 2019, Nature Communications.
[73] Alexander Binder, et al. Evaluating the Visualization of What a Deep Neural Network Has Learned, 2015, IEEE Transactions on Neural Networks and Learning Systems.
[74] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[75] Andrew Gordon Wilson, et al. A Simple Baseline for Bayesian Uncertainty in Deep Learning, 2019, NeurIPS.
[76] Tom Schaul, et al. StarCraft II: A New Challenge for Reinforcement Learning, 2017, ArXiv.
[77] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[78] Ralph Ewerth, et al. Interpretable Semantic Photo Geolocalization, 2021, ArXiv.
[79] Cuntai Guan, et al. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI, 2019, IEEE Transactions on Neural Networks and Learning Systems.
[80] Klaus-Robert Müller, et al. Learning how to explain neural networks: PatternNet and PatternAttribution, 2017, ICLR.
[81] Lucy R. Chai. Uncertainty Estimation in Bayesian Neural Networks And Links to Interpretability, 2018.
[82] Andrew Gordon Wilson, et al. Bayesian Deep Learning and a Probabilistic Perspective of Generalization, 2020, NeurIPS.
[83] Xiaodong Liu, et al. Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding, 2019, ArXiv.