Marissa Radensky | Doug Downey | Kyle Lo | Zoran Popović | Daniel S. Weld | University of Washington | Allen Institute for Artificial Intelligence | Northwestern University