The Mythos of Model Interpretability