Vis Ex Machina: An Analysis of Trust in Human versus Algorithmically Generated Visualization Recommendations

A growing number of visualization systems simplify the data analysis process by automatically suggesting relevant visualizations. However, little work has been done to understand whether users trust these automated recommendations. In this paper, we present the results of a crowdsourced study exploring preferences and perceived quality of recommendations positioned as either human-curated or algorithmically generated. We observe that while participants initially prefer human recommenders, their actions suggest indifference to the recommendation source when evaluating visualization recommendations. The relevance of the presented information (e.g., the presence of certain data fields) was the most critical factor, followed by a belief in the recommender's ability to create accurate visualizations. Our findings suggest a general indifference towards the provenance of recommendations, and point to idiosyncratic definitions of visualization quality and trustworthiness that may not be captured by simple measures. We suggest that recommendation systems should be tailored to the information-foraging strategies of specific users.
