Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations

Evaluating the effectiveness of data visualizations is a challenging undertaking that often relies on one-off studies testing a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g., color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. Finally, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
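The evaluation approach described above, comparing a model's saliency map against a fixation map derived from eye tracking, is commonly scored with metrics such as Pearson's correlation coefficient (CC). The sketch below is a minimal, hypothetical illustration of that comparison, assuming NumPy and synthetic Gaussian "fixation blobs" in place of real model output and eye tracking data; it is not the paper's evaluation code.

```python
import numpy as np

def pearson_cc(saliency_map, fixation_map):
    # Pearson's r between two maps: normalize each to zero mean and unit
    # variance, then take the mean of the elementwise product.
    s = saliency_map.astype(float)
    f = fixation_map.astype(float)
    s = (s - s.mean()) / (s.std() + 1e-12)
    f = (f - f.mean()) / (f.std() + 1e-12)
    return float((s * f).mean())

def gaussian_blob(shape, center, sigma):
    # Synthetic stand-in for a fixation-density map: a 2D Gaussian
    # centered on one (row, col) location.
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
                  / (2.0 * sigma ** 2))

model_map = gaussian_blob((64, 64), (32, 32), 8.0)   # model's prediction
human_map = gaussian_blob((64, 64), (34, 30), 8.0)   # nearby fixations
off_map   = gaussian_blob((64, 64), (8, 56), 8.0)    # fixations elsewhere

# A model that predicts where viewers actually looked scores high CC;
# a mismatched prediction scores near zero.
print(pearson_cc(model_map, human_map))
print(pearson_cc(model_map, off_map))
```

CC is only one of several metrics used in saliency benchmarking (AUC and normalized scanpath saliency are also common); the choice of metric can change which model appears best.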
