A Case Study Using Visualization Interaction Logs and Insight Metrics to Understand How Analysts Arrive at Insights

We present results from an experiment that uses logs of interactions with a visual analytics application to better understand how interactions lead to insight generation. We conducted an insight-based user study of the application and performed post hoc quantitative analyses of participants' measured insight metrics and interaction logs. These analyses identified interaction features that correlated with insight characteristics, and we confirmed the findings through a qualitative analysis of video captured during the study. The results include design guidelines for the application aimed at supporting insight generation. Furthermore, we demonstrate an analysis method that uses interaction logs to identify which interaction patterns led to insights, going beyond insight-based evaluations that only quantify insight characteristics. We also discuss choices and pitfalls encountered when applying this method, such as the benefits and costs of mapping application-specific actions to an abstraction framework before further analysis. Our method can be applied to evaluations of other visualization tools to inform the design of insight-promoting interactions and to better understand analyst behaviors.
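To make the post hoc analysis concrete, the sketch below shows one way such a pipeline could be structured in Python. It is a minimal illustration, not the study's actual implementation: the action names, the abstraction mapping, the feature set, and the use of Spearman correlation are all assumptions introduced here for illustration.

```python
import pandas as pd
from scipy import stats

# Hypothetical mapping from application-specific actions to abstract
# action categories -- the "abstraction framework" step discussed above.
# The action names and categories are illustrative only.
ACTION_ABSTRACTION = {
    "drag_time_slider": "filter",
    "click_map_region": "select",
    "open_detail_panel": "inspect",
    "pan_map": "navigate",
}

def extract_features(log: pd.DataFrame) -> pd.Series:
    """Reduce one participant's action log (columns: 'timestamp' in
    seconds since session start, 'action') to summary features."""
    abstract = log["action"].map(ACTION_ABSTRACTION).fillna("other")
    minutes = max((log["timestamp"].max() - log["timestamp"].min()) / 60.0, 1e-9)
    return pd.Series({
        "n_actions": len(log),
        "actions_per_minute": len(log) / minutes,
        "frac_filter": (abstract == "filter").mean(),
        "frac_navigate": (abstract == "navigate").mean(),
    })

def correlate_with_insights(logs: dict[str, pd.DataFrame],
                            insights: pd.DataFrame) -> pd.DataFrame:
    """Correlate every interaction feature with every insight metric
    (e.g., insight count, depth) across participants. `logs` maps a
    participant id to that participant's log; `insights` is indexed
    by participant id with one column per insight metric."""
    feats = pd.DataFrame({pid: extract_features(df)
                          for pid, df in logs.items()}).T
    rows = []
    for feature in feats.columns:
        for metric in insights.columns:
            rho, p = stats.spearmanr(feats[feature],
                                     insights.loc[feats.index, metric])
            rows.append({"feature": feature, "metric": metric,
                         "rho": rho, "p": p})
    return pd.DataFrame(rows)
```

Correlations flagged this way would then be checked against the qualitative video analysis, as described above, rather than interpreted on their own.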
