Input or Output: Effects of Explanation Focus on the Perception of Explainable Recommendation with Varying Level of Details
Mohamed Amine Chatti, Arham Muslim, Qurat Ul Ain, Mouadh Guesmi, Laura Vorgerd, Shadi Zumor, Yiqi Sun, Fangzheng Ji, Shoeb Ahmed Joarder, Thao Ngo
[1] Lise Getoor, et al. Personalized explanations for hybrid recommender systems, 2019, IUI.
[2] Weng-Keen Wong, et al. Too much, too little, or just right? Ways explanations impact end users' mental models, 2013, 2013 IEEE Symposium on Visual Languages and Human Centric Computing.
[3] René F. Kizilcec, et al. How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface, 2016, CHI.
[4] Mohamed Amine Chatti, et al. On-demand Personalized Explanation for Transparent Recommendation, 2021, UMAP.
[5] Izak Benbasat, et al. Do Users Always Want to Know More? Investigating the Relationship between System Transparency and Users' Trust in Advice-Giving Systems, 2019, ECIS.
[6] Martijn Millecamp, et al. To explain or not to explain: the effects of personal characteristics when explaining music recommendations, 2019, IUI.
[8] Li Chen, et al. A user-centric evaluation framework for recommender systems, 2011, RecSys '11.
[9] Li Chen, et al. Evaluating recommender systems from the user's perspective: survey of the state of the art, 2012, User Modeling and User-Adapted Interaction.
[10] Filip Radlinski, et al. Measuring Recommendation Explanation Quality: The Conflicting Goals of Explanations, 2020, SIGIR.
[12] Jacob Cohen. Statistical Power Analysis for the Behavioral Sciences, 1969, The SAGE Encyclopedia of Research Design.
[14] Mohamed Amine Chatti, et al. SIMT: A Semantic Interest Modeling Toolkit, 2021, UMAP.
[16] Mohamed Amine Chatti, et al. Open, Scrutable and Explainable Interest Models for Transparent Recommendation, 2021, IUI Workshops.
[17] Peter Brusilovsky, et al. Making Educational Recommendations Transparent through a Fine-Grained Open Learner Model, 2019, IUI Workshops.
[18] Mouzhi Ge, et al. How should I explain? A comparison of different explanation types for recommender systems, 2014, Int. J. Hum. Comput. Stud.
[19] Nava Tintarev, et al. Evaluating the effectiveness of explanations for recommender systems, 2012, User Modeling and User-Adapted Interaction.
[20] Bart P. Knijnenburg, et al. Recommender Systems for Self-Actualization, 2016, RecSys.
[21] Martijn Millecamp, et al. Controlling Spotify Recommendations: Effects of Personal Characteristics on Music Recommender User Interfaces, 2018, UMAP.
[22] Emily Chen, et al. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation, 2018, ArXiv.
[23] Izak Benbasat, et al. Transparency in Advice-Giving Systems: A Framework and a Research Model for Transparency Provision, 2019, IUI Workshops.
[24] Raymond J. Mooney, et al. Explaining Recommendations: Satisfaction vs. Promotion, 2005.
[25] Nava Tintarev. The Effectiveness of Personalized Movie Explanations: An Experiment Using Commercial Meta-data, 2008, AH.
[26] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.
[27] Denis Parra, et al. Moodplay: Interactive Mood-based Music Discovery and Recommendation, 2016, UMAP.
[28] Izak Benbasat, et al. Recommendation Agents for Electronic Commerce: Effects of Explanation Facilities on Trusting Beliefs, 2007, J. Manag. Inf. Syst.
[29] Filip Radlinski, et al. Transparent, Scrutable and Explainable User Models for Personalized Recommendation, 2019, SIGIR.
[30] Jeffrey Nichols, et al. System U: automatically deriving personality traits from social media for people recommendation, 2014, RecSys '14.
[31] David Graus, et al. "Let me tell you who you are": Explaining recommender systems by opening black box user profiles, 2018, FATREC 2018.
[32] Bart P. Knijnenburg, et al. Explaining the user experience of recommender systems, 2012, User Modeling and User-Adapted Interaction.
[33] Judith Masthoff, et al. Designing and Evaluating Explanations for Recommender Systems, 2011, Recommender Systems Handbook.
[34] Eric D. Ragan, et al. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems, 2018, ACM Trans. Interact. Intell. Syst.
[35] Nava Tintarev, et al. Reading News with a Purpose: Explaining User Profiles for Self-Actualization, 2019, UMAP.
[36] John Riedl, et al. Explaining collaborative filtering recommendations, 2000, CSCW '00.
[38] Xu Chen, et al. Explainable Recommendation: A Survey and New Perspectives, 2018, Found. Trends Inf. Retr.
[39] Judith Masthoff, et al. Explaining Recommendations: Design and Evaluation, 2015, Recommender Systems Handbook.
[40] Dietmar Jannach, et al. A systematic review and taxonomy of explanations in decision support and recommender systems, 2017, User Modeling and User-Adapted Interaction.
[41] Judith Masthoff, et al. A Survey of Explanations in Recommender Systems, 2007, 2007 IEEE 23rd International Conference on Data Engineering Workshop.