When people and algorithms meet: user-reported problems in intelligent everyday applications

The complex nature of intelligent systems motivates work on supporting users during interaction, for example through explanations. However, there is as yet little empirical evidence on the specific problems users face with such systems in everyday use. This paper investigates these problems as reported by users: we analysed 35,448 reviews of three apps on the Google Play Store (Facebook, Netflix and Google Maps) with sentiment analysis and topic modelling to reveal problems during interaction that can be attributed to the apps' algorithmic decision-making. We enriched this data with users' coping and support strategies through a follow-up online survey (N=286). In particular, we found problems and strategies related to content, the algorithm, user choice, and feedback. We discuss the corresponding implications for designing user support, highlighting the importance of user control and of explaining outputs rather than processes. Our work thus contributes empirical evidence that facilitates understanding of users' everyday problems with intelligent systems.
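
The abstract describes the pipeline only at a high level, and the paper's actual tooling and parameters are not given here. As a rough, hypothetical sketch of that kind of workflow, the following Python filters reviews by sentiment with NLTK's VADER and then groups the negative ones into problem themes with scikit-learn's LatentDirichletAllocation. The sample reviews, sentiment threshold, and topic count are all illustrative assumptions, not the authors' settings.

```python
# Sketch of a sentiment-filtering + LDA topic-modelling pipeline over app
# reviews. Thresholds and topic counts below are illustrative guesses.
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # needs nltk.download('vader_lexicon')
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for Play Store reviews (the real corpus had 35,448).
reviews = [
    "Terrible update, the feed hides posts from my closest friends",
    "Love the new interface, works great",
    "Awful recommendations, completely wrong after a single click",
]

# 1) Sentiment analysis: keep only reviews VADER scores as negative overall.
sia = SentimentIntensityAnalyzer()
negative = [r for r in reviews if sia.polarity_scores(r)["compound"] < -0.05]

# 2) Topic modelling: fit LDA on a bag-of-words representation of the
#    negative reviews to surface recurring problem themes.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(negative)
lda = LatentDirichletAllocation(n_components=2, random_state=0)  # topic count is a guess
lda.fit(doc_term)

# Print the top words per topic as a rough label for each problem theme.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"topic {i}: {', '.join(top)}")
```

In practice one would inspect the top words per topic (as in the final loop) and manually label the themes, which is broadly how topic-model outputs such as "content, algorithm, user choice, and feedback" are typically interpreted.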
