Making deep neural networks right for the right scientific reasons by interacting with their explanations
Kristian Kersting | Stefano Teso | Anne-Katrin Mahlein | Xiaoting Shao | Patrick Schramowski | Wolfgang Stammer | Anna Brugger | Hans-Georg Luigs
[1] Andrew McCallum,et al. Toward Optimal Active Learning through Sampling Estimation of Error Reduction , 2001, ICML.
[2] Daphne Koller,et al. Support Vector Machine Active Learning with Applications to Text Classification , 2000, J. Mach. Learn. Res..
[4] Corinna Cortes,et al. Support-Vector Networks , 1995, Machine Learning.
[5] Jianguo Zhang,et al. The PASCAL Visual Object Classes Challenge , 2006 .
[6] Rich Caruana,et al. Model compression , 2006, KDD '06.
[7] Luc Van Gool,et al. The 2005 PASCAL Visual Object Classes Challenge , 2005, MLCW.
[8] R. Nowak,et al. Upper and Lower Error Bounds for Active Learning , 2006 .
[9] Andreas Krause,et al. Nonmyopic active learning of Gaussian processes: an exploration-exploitation approach , 2007, ICML '07.
[10] Christine D. Piatko,et al. Using “Annotator Rationales” to Improve Machine Learning for Text Categorization , 2007, NAACL.
[11] J. Simpson. Psychological Foundations of Trust , 2007 .
[12] Ulrike von Luxburg,et al. A tutorial on spectral clustering , 2007, Stat. Comput..
[13] Geoffrey E. Hinton,et al. Visualizing Data using t-SNE , 2008, J. Mach. Learn. Res..
[14] Li Fei-Fei,et al. ImageNet: A large-scale hierarchical image database , 2009, CVPR.
[15] Natalie de Souza. High-throughput phenotyping , 2009, Nature Methods.
[16] Maria-Florina Balcan,et al. The true sample complexity of active learning , 2010, Machine Learning.
[17] Carla E. Brodley,et al. The Constrained Weight Space SVM: Learning with Ranked Features , 2011, ICML.
[18] Burr Settles,et al. Closing the Loop: Fast, Interactive Semi-Supervised Annotation With Queries on Features and Instances , 2011, EMNLP.
[19] A. Thomaz,et al. Mixed-Initiative Active Learning , 2012 .
[20] Thomas G. Dietterich,et al. Active Imitation Learning via Reduction to I.I.D. Active Learning , 2012, AAAI Fall Symposium: Robots Learning Interactively from Human Teachers.
[21] Burr Settles,et al. Active Learning , 2012, Synthesis Lectures on Artificial Intelligence and Machine Learning.
[22] Jeffrey M. Bradshaw,et al. Trust in Automation , 2013, IEEE Intelligent Systems.
[23] Steve Hanneke,et al. Theory of Disagreement-Based Active Learning , 2014, Found. Trends Mach. Learn..
[24] Pietro Perona,et al. Microsoft COCO: Common Objects in Context , 2014, ECCV.
[25] Esther Lau. Plant genomics: High-throughput phenotyping of rice growth traits , 2014, Nature Reviews Genetics.
[26] Esther Lau. Microbial genetics: Selective killing using programmable Cas9 , 2014, Nature Reviews Genetics.
[27] Weng-Keen Wong,et al. Principles of Explanatory Debugging to Personalize Interactive Machine Learning , 2015, IUI.
[28] Thorsten Joachims,et al. Coactive Learning , 2015, J. Artif. Intell. Res..
[29] Zoubin Ghahramani,et al. Probabilistic machine learning and artificial intelligence , 2015, Nature.
[30] Masooda Bashir,et al. Trust in Automation , 2015, Hum. Factors.
[31] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[32] Michael I. Jordan,et al. Machine learning: Trends, perspectives, and prospects , 2015, Science.
[33] Andrew Zisserman,et al. Very Deep Convolutional Networks for Large-Scale Image Recognition , 2014, ICLR.
[34] Scott Lundberg,et al. An unexpected unity among methods for interpreting model predictions , 2016, ArXiv.
[35] Bolei Zhou,et al. Learning Deep Features for Discriminative Localization , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[36] Carlos Guestrin,et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier , 2016, ArXiv.
[37] Ramprasaath R. Selvaraju,et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[38] Tony P. Pridmore,et al. Deep machine learning provides state-of-the-art performance in image-based plant phenotyping , 2016, bioRxiv.
[39] Osbert Bastani,et al. Interpreting Blackbox Models via Model Extraction , 2017, ArXiv.
[40] T. Pridmore,et al. Plant Phenomics, From Sensors to Knowledge , 2017, Current Biology.
[41] Andrew Slavin Ross,et al. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations , 2017, IJCAI.
[42] Demis Hassabis,et al. Mastering the game of Go without human knowledge , 2017, Nature.
[43] Zoubin Ghahramani,et al. Deep Bayesian Active Learning with Image Data , 2017, ICML.
[44] Roland Vollgraf,et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms , 2017, ArXiv.
[45] Sriraam Natarajan,et al. Human-Guided Learning for Probabilistic Logic Models , 2018, Front. Robot. AI.
[46] Moritz Körber,et al. Theoretical Considerations and Development of a Questionnaire to Measure Trust in Automation , 2018, Advances in Intelligent Systems and Computing.
[47] Emily Chen,et al. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation , 2018, ArXiv.
[48] Been Kim,et al. Sanity Checks for Saliency Maps , 2018, NeurIPS.
[49] Susan T. Dumais,et al. Short-Term Satisfaction and Long-Term Coverage: Understanding How Users Tolerate Algorithmic Exploration , 2018, WSDM.
[50] Lalana Kagal,et al. Explaining Explanations: An Overview of Interpretability of Machine Learning , 2018, 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA).
[51] Trevor Darrell,et al. Multimodal Explanations: Justifying Decisions and Pointing to the Evidence , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[52] Marcus A. Badgeley,et al. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study , 2018, PLoS medicine.
[53] Marcus A. Badgeley,et al. Confounding variables can degrade generalization performance of radiological deep learning models , 2018, ArXiv.
[54] Marcus A. Badgeley,et al. Deep learning predicts hip fracture using confounding patient and healthcare variables , 2018, npj Digital Medicine.
[55] Cynthia Rudin,et al. This Looks Like That: Deep Learning for Interpretable Image Recognition , 2018 .
[56] Alexander Binder,et al. Unmasking Clever Hans predictors and assessing what machines really learn , 2019, Nature Communications.
[57] Wojciech Samek,et al. Analyzing ImageNet with Spectral Relevance Analysis: Towards ImageNet un-Hans'ed , 2019, ArXiv.
[58] D. Erhan,et al. A Benchmark for Interpretability Methods in Deep Neural Networks , 2018, NeurIPS.
[59] Larsson Omberg,et al. A Permutation Approach to Assess Confounding in Machine Learning Applications for Digital Health , 2019, KDD.
[60] Kristian Kersting,et al. Quantitative and qualitative phenotyping of disease resistance of crops by hyperspectral sensors: seamless interlocking of phytopathology, sensors, and machine learning is needed! , 2019, Current opinion in plant biology.
[61] Frederick Liu,et al. Incorporating Priors with Feature Attribution on Text Classification , 2019, ACL.
[62] Klaus-Robert Müller,et al. Explanations can be manipulated and geometry is to blame , 2019, NeurIPS.
[63] Pascal Sturmfels,et al. Learning Explainable Models Using Attribution Priors , 2019, ArXiv.
[64] Hongxia Jin,et al. Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded , 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[65] Cynthia Rudin,et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead , 2018, Nature Machine Intelligence.
[66] Tatsuya Harada,et al. Learning to Explain With Complemental Examples , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[67] Farid Melgani,et al. Computer vision-based phenotyping for improvement of plant productivity: a machine learning perspective , 2018, GigaScience.
[68] Kristian Kersting,et al. Explanatory Interactive Machine Learning , 2019, AIES.
[69] Franco Turini,et al. A Survey of Methods for Explaining Black Box Models , 2018, ACM Comput. Surv..
[70] Oliver Hinz,et al. How and What Can Humans Learn from Being in the Loop? , 2020, KI - Künstliche Intelligenz.