Emily Chen | Finale Doshi-Velez | Been Kim | Sam Gershman | Jeffrey He | Menaka Narayanan
[1] F. Keil, et al. Explanation and understanding, 2015.
[2] T. Lombrozo, et al. Simplicity and probability in causal explanation, 2007, Cognitive Psychology.
[3] Ferat Sahin, et al. A survey on feature selection methods, 2014, Comput. Electr. Eng.
[4] Bart Baesens, et al. An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models, 2011, Decis. Support Syst.
[5] Michael C. Hughes, et al. Supervised topic models for clinical interpretability, 2016, arXiv:1612.01678.
[6] Ian H. Witten, et al. Generating Accurate Rule Sets Without Global Optimization, 1998, ICML.
[7] Ramprasaath R. Selvaraju, et al. Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization, 2016.
[8] Stephen Muggleton, et al. Meta-interpretive learning of higher-order dyadic datalog: predicate invention revisited, 2013, Machine Learning.
[9] G. A. Miller. The magical number seven, plus or minus two: some limits on our capacity for processing information, 1956, Psychological Review.
[10] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, arXiv.
[11] Jure Leskovec, et al. Interpretable Decision Sets: A Joint Framework for Description and Prediction, 2016, KDD.
[12] Finale Doshi-Velez, et al. Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models, 2016, arXiv.
[13] Bernease Herman, et al. The Promise and Peril of Human Evaluation for Model Interpretability, 2017, arXiv.
[14] Delbert Dueck, et al. Clustering by Passing Messages Between Data Points, 2007, Science.
[15] Finale Doshi-Velez, et al. Mind the Gap: A Generative Approach to Interpretable Feature Selection and Extraction, 2015, NIPS.
[16] Raymond J. Mooney, et al. Explaining Recommendations: Satisfaction vs. Promotion, 2005.
[17] D. Goldstein, et al. Simple Rules for Complex Decisions, 2017, arXiv:1702.04690.
[18] Andrew Slavin Ross, et al. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations, 2017, IJCAI.
[19] Cynthia Rudin, et al. Supersparse linear integer models for optimized medical scoring systems, 2015, Machine Learning.
[20] Pieter Abbeel, et al. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, 2016, NIPS.
[21] Girish H. Subramanian, et al. A comparison of the decision table and tree, 1992, CACM.
[22] Stephen Muggleton, et al. How Does Predicate Invention Affect Human Comprehensibility?, 2016, ILP.
[23] Avanti Shrikumar, et al. Not Just A Black Box: Interpretable Deep Learning by Propagating Activation Differences, 2016.
[24] Judith Masthoff, et al. Explaining Recommendations: Design and Evaluation, 2015, Recommender Systems Handbook.
[25] Ryan P. Adams, et al. Graph-Sparse LDA: A Topic Model with Structured Sparsity, 2014, AAAI.
[26] Weiwei Liu, et al. Sparse Perceptron Decision Tree for Millions of Dimensions, 2016, AAAI.
[27] William W. Cohen. Fast Effective Rule Induction, 1995, ICML.
[28] Cynthia Rudin, et al. The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification, 2014, NIPS.
[29] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, arXiv:1702.08608.
[30] Klaus-Robert Müller, et al. PatternNet and PatternLRP - Improving the interpretability of neural networks, 2017, arXiv.
[31] John Riedl, et al. Is seeing believing? How recommender system interfaces affect users' opinions, 2003, CHI '03.
[32] Cynthia Rudin, et al. Bayesian Rule Sets for Interpretable Classification, 2016, IEEE 16th International Conference on Data Mining (ICDM).
[33] T. Lombrozo. The structure and function of explanations, 2006, Trends in Cognitive Sciences.
[34] Daniel G. Goldstein, et al. Manipulating and Measuring Model Interpretability, 2018, CHI.
[35] Wei-Yin Loh, et al. Classification and regression trees, 2011, WIREs Data Mining Knowl. Discov.
[36] Tahir Mehmood, et al. A review of variable selection methods in Partial Least Squares Regression, 2012.
[37] Mirco Musolesi, et al. Interpretable Machine Learning for Mobile Notification Management: An Overview of PrefMiner, 2017, GetMobile.
[38] Maya R. Gupta, et al. Fast and Flexible Monotonic Functions with Ensembles of Lattices, 2016, NIPS.
[39] Carlos Guestrin, et al. Programs as Black-Box Explanations, 2016, arXiv.
[40] Dympna O'Sullivan, et al. The Role of Explanations on Trust and Reliance in Clinical Decision Support Systems, 2015, International Conference on Healthcare Informatics.
[41] Regina Barzilay, et al. Rationalizing Neural Predictions, 2016, EMNLP.
[42] Weng-Keen Wong, et al. Too much, too little, or just right? Ways explanations impact end users' mental models, 2013, IEEE Symposium on Visual Languages and Human Centric Computing.
[43] Skipper Seabold, et al. Statsmodels: Econometric and Statistical Modeling with Python, 2010, SciPy.
[44] Niklas Lavesson, et al. User-oriented Assessment of Classification Model Understandability, 2011, SCAI.
[45] Been Kim, et al. iBCM: Interactive Bayesian Case Model Empowering Humans via Intuitive Interaction, 2015.
[46] Cynthia Rudin, et al. Falling Rule Lists, 2014, AISTATS.
[47] Alex Alves Freitas, et al. Comprehensible classification models: a position paper, 2014, SKDD.
[48] Peter Clark, et al. Rule Induction with CN2: Some Recent Improvements, 1991, EWSL.
[49] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2017, IEEE International Conference on Computer Vision (ICCV).
[50] Finale Doshi-Velez, et al. A Roadmap for a Rigorous Science of Interpretability, 2017, arXiv.
[51] David A. Landgrebe, et al. A survey of decision tree classifier methodology, 1991, IEEE Trans. Syst. Man Cybern.
[52] D. Kahneman. Thinking, Fast and Slow, 2011.
[53] Douglas S. Bell, et al. Interface design principles for usable decision support: A targeted review of best practices for clinical prescribing interventions, 2012, J. Biomed. Informatics.
[54] Johannes Gehrke, et al. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission, 2015, KDD.
[55] R. Rivest. Learning Decision Lists, 1987, Machine Learning.
[56] Martin Wattenberg, et al. SmoothGrad: removing noise by adding noise, 2017, arXiv.
[57] Suresh Venkatasubramanian, et al. Auditing Black-Box Models for Indirect Influence, 2016, ICDM.
[58] Cynthia Rudin, et al. Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model, 2015, arXiv.
[59] Donald Michie, et al. Machine Learning in the Next Five Years, 1988, EWSL.
[60] Paul Raccuglia, et al. Machine-learning-assisted materials discovery using failed experiments, 2016, Nature.
[61] Tapio Elomaa, et al. In Defense of C4.5: Notes in Learning One-Level Decision Trees, 1994, ICML.