Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System
Mohamed Amine Chatti | Mouadh Guesmi | Shoeb Joarder | Qurat Ul Ain | R. Alatrash | Clara Siepmann | Hoda Ghanbarzadeh