Interpretable Machine Learning