Vis Ex Machina: An Analysis of Trust in Human versus Algorithmically Generated Visualization Recommendations

A growing number of visualization systems simplify the data analysis process by automatically suggesting relevant visualizations. However, little work has been done to understand whether users trust these automated recommendations. In this paper, we present the results of a crowd-sourced study exploring preferences and perceived quality of recommendations that have been positioned as either human-curated or algorithmically generated. We observe that while participants initially prefer human recommenders, their actions suggest an indifference to recommendation source when evaluating visualization recommendations. The relevance of the presented information (e.g., the presence of certain data fields) was the most critical factor, followed by a belief in the recommender's ability to create accurate visualizations. Our findings suggest a general indifference towards the provenance of recommendations, and point to idiosyncratic definitions of visualization quality and trustworthiness that may not be captured by simple measures. We suggest that recommendation systems should be tailored to the information-foraging strategies of specific users.

∗Both authors contributed equally to this research.

CCS CONCEPTS: • Human-centered computing → Empirical studies in visualization; Visualization design and evaluation methods.
