Personalized Explanation for Machine Learning: a Conceptualization
[1] Hsinchun Chen, et al. Web mining: Machine learning for web applications, 2005, Annu. Rev. Inf. Sci. Technol.
[2] Guokun Lai, et al. Explicit factor models for explainable recommendation based on phrase-level sentiment analysis, 2014, SIGIR.
[3] Johannes Schneider, et al. Mining Sequences of Developer Interactions in Visual Studio for Usage Smells, 2017, IEEE Transactions on Software Engineering.
[4] Geoffrey E. Hinton, et al. Deep Learning, 2015, Nature.
[5] Richard T. Watson, et al. Analyzing the Past to Prepare for the Future: Writing a Literature Review, 2002, MIS Q.
[6] Alun D. Preece, et al. Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems, 2018, ArXiv.
[7] Nada Lavrac, et al. Selected techniques for data mining in medicine, 1999, Artif. Intell. Medicine.
[8] Samuel J. Gershman, et al. Human-in-the-Loop Interpretability Prior, 2018, NeurIPS.
[9] Quanshi Zhang, et al. Interpretable Convolutional Neural Networks, 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[10] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, ArXiv (1702.08608).
[11] Brad Boehmke, et al. Interpretable Machine Learning, 2019.
[12] Amina Adadi, et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018, IEEE Access.
[13] Marshall Scott Poole, et al. What Is Personalization? Perspectives on the Design and Implementation of Personalization in Information Systems, 2006, J. Organ. Comput. Electron. Commer.
[14] Mike Wu, et al. Beyond Sparsity: Tree Regularization of Deep Models for Interpretability, 2017, AAAI.
[15] Mayuram S. Krishnan, et al. The Personalization Privacy Paradox: An Empirical Evaluation of Information Transparency and the Willingness to be Profiled Online for Personalization, 2006, MIS Q.
[16] Yang Wang, et al. Personalization and privacy: a survey of privacy risks and remedies in personalization-based systems, 2012, User Modeling and User-Adapted Interaction.
[17] Juan A. Recio-García, et al. Make it personal: A social explanation system applied to group recommendations, 2017, Expert Syst. Appl.
[18] Mary Beth Rosson, et al. The personalization privacy paradox: An exploratory study of decision making process for location-aware marketing, 2011, Decis. Support Syst.
[19] Maya Cakmak, et al. Power to the People: The Role of Humans in Interactive Machine Learning, 2014, AI Mag.
[20] Wojciech Samek, et al. Methods for interpreting and understanding deep neural networks, 2017, Digit. Signal Process.
[21] Andrew Slavin Ross, et al. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations, 2017, IJCAI.
[22] Y. I. Liou, et al. Knowledge acquisition: issues, techniques and methodology, 1992, DATB.
[23] William R. King, et al. Understanding the Role and Methods of Meta-Analysis in IS Research, 2005, Commun. Assoc. Inf. Syst.
[24] Zachary Chase Lipton. The mythos of model interpretability, 2016, ACM Queue.
[25] Bart Baesens, et al. An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models, 2011, Decis. Support Syst.
[26] Alexandra Kirsch, et al. Explain to whom? Putting the User in the Center of Explainable AI, 2017, CEx@AI*IA.
[27] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[28] Josep Lluís de la Rosa i Esteva, et al. A Taxonomy of Recommender Agents on the Internet, 2003, Artificial Intelligence Review.
[29] Peter A. Flach, et al. Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant, 2018, IJCAI.
[30] Y. de Montjoye, et al. Unique in the shopping mall: On the reidentifiability of credit card metadata, 2015, Science.
[31] Xu Chen, et al. Visually Explainable Recommendation, 2018, ArXiv.
[32] Heng-Tze Cheng, et al. Wide & Deep Learning for Recommender Systems, 2016, DLRS@RecSys.
[33] Franca Garzotto, et al. Investigating the Persuasion Potential of Recommender Systems from a Quality Perspective: An Empirical Study, 2012, TIIS.
[34] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.
[35] Jerry Alan Fails, et al. Interactive machine learning, 2003, IUI '03.
[36] Weng-Keen Wong, et al. Principles of Explanatory Debugging to Personalize Interactive Machine Learning, 2015, IUI.
[37] Geoffrey I. Webb, et al. Machine Learning for User Modeling, 2001, User Modeling and User-Adapted Interaction.
[38] Arvind Satyanarayan, et al. The Building Blocks of Interpretability, 2018, Distill.
[39] Franco Turini, et al. A Survey of Methods for Explaining Black Box Models, 2018, ACM Comput. Surv.
[40] Emily Chen, et al. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation, 2018, ArXiv.
[41] Dhruv Batra, et al. Human Attention in Visual Question Answering: Do Humans and Deep Networks look at the same regions?, 2016, EMNLP.
[42] Izak Benbasat, et al. Explanations From Intelligent Systems: Theoretical Foundations and Implications for Practice, 1999, MIS Q.
[43] Johannes Fürnkranz, et al. On Cognitive Preferences and the Interpretability of Rule-based Models, 2018, ArXiv.
[44] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[45] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.
[46] Fred G. W. C. Paas, et al. The Efficiency of Instructional Conditions: An Approach to Combine Mental Effort and Performance Measures, 1992.
[47] Jure Leskovec, et al. Interpretable & Explorable Approximations of Black Box Models, 2017, ArXiv.
[48] Marcel van Gerven, et al. Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges, 2018, ArXiv.
[49] Cynthia Rudin, et al. Bayesian Rule Sets for Interpretable Classification, 2016, IEEE 16th International Conference on Data Mining (ICDM).
[50] M. Bouaziz, et al. An Introduction to Computer Security, 2012.
[51] Matt J. Kusner, et al. Counterfactual Fairness, 2017, NIPS.
[52] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2015, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[53] F. Maxwell Harper, et al. Crowd-Based Personalized Natural Language Explanations for Recommendations, 2016, RecSys.
[54] Filip Karlo Dosilovic, et al. Explainable artificial intelligence: A survey, 2018, 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO).
[55] Daniel G. Goldstein, et al. Manipulating and Measuring Model Interpretability, 2018, CHI.
[56] Reuben Binns, et al. Fairness in Machine Learning: Lessons from Political Philosophy, 2017, FAT.
[57] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[58] Marco Basaldella, et al. Crowdsourcing Relevance Assessments: The Unexpected Benefits of Limiting the Time to Judge, 2016, HCOMP.
[59] Cordelia Schmid, et al. Learning object class detectors from weakly annotated video, 2012, IEEE Conference on Computer Vision and Pattern Recognition.
[60] Peter McGeorge, et al. The uses of ‘contrived’ knowledge elicitation techniques, 1992.
[61] David Eckhoff, et al. Technical Privacy Metrics: A Systematic Survey, 2018, ACM Comput. Surv.
[62] Heiko Paulheim, et al. Generating Possible Interpretations for Statistics from Linked Open Data, 2012, ESWC.
[63] Martin Wattenberg, et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV), 2017, ICML.
[64] Cynthia Rudin, et al. Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains its Predictions, 2017, AAAI.
[65] Jan vom Brocke, et al. Identifying Preferences through mouse cursor movements - Preliminary Evidence, 2017, ECIS.
[66] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE, 2008, J. Mach. Learn. Res.
[67] Amit Dhurandhar, et al. Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives, 2018, NeurIPS.
[68] Xu Chen, et al. Explainable Recommendation: A Survey and New Perspectives, 2018, Found. Trends Inf. Retr.
[69] Anind K. Dey, et al. Why and why not explanations improve the intelligibility of context-aware intelligent systems, 2009, CHI.
[70] N. Shadbolt, et al. Eliciting Knowledge from Experts: A Methodological Analysis, 1995.
[71] Ricardo Buettner, et al. Cognitive Workload of Humans Using Artificial Intelligence Systems: Towards Objective Measurement Applying Eye-Tracking Technology, 2013, KI.
[72] Been Kim, et al. Sanity Checks for Saliency Maps, 2018, NeurIPS.
[73] Li Chen, et al. Survey of Preference Elicitation Methods, 2004.
[74] Jeff Tian, et al. Improving Web Navigation Usability by Comparing Actual and Anticipated Usage, 2015, IEEE Transactions on Human-Machine Systems.