X-Eye: A reference format for eye tracking data to facilitate analyses across databases

Datasets of images annotated with eye tracking data constitute important ground truth for the development of saliency models, which have applications in many areas of electronic imaging. While comparisons and reviews of saliency models abound, similar comparisons among the eye tracking databases themselves are rare. In an earlier paper, we reviewed the content and purpose of over two dozen publicly available databases and discussed their commonalities and differences. A major issue is that the formats of the various datasets vary considerably, owing to the different tools used for recording eye movements, so specialized code is often required before the data can be used for further analysis. In this paper, we therefore propose a common reference format for eye tracking data, together with routines for converting 16 existing image eye tracking databases to that format. Furthermore, we conduct several analyses on these datasets as examples of the cross-database comparisons that X-Eye facilitates.
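To make the idea of a common reference format concrete, the sketch below shows one way such a format and its conversion routines might be structured. This is an illustrative assumption, not the actual X-Eye specification: the field names (`subject`, `image`, `x`, `y`, `duration_ms`) and the choice to normalize coordinates to [0, 1] are hypothetical, chosen only to show how per-dataset pixel records could be mapped onto resolution-independent records that downstream analyses can share.

```python
from dataclasses import dataclass


@dataclass
class Fixation:
    """One fixation in a hypothetical common format.

    Coordinates are normalized to [0, 1] relative to the stimulus
    image, so that recordings made at different screen resolutions
    become directly comparable. Field names are illustrative only.
    """
    subject: int        # observer index within the dataset
    image: str          # stimulus image identifier
    x: float            # horizontal position, normalized to [0, 1]
    y: float            # vertical position, normalized to [0, 1]
    duration_ms: float  # fixation duration in milliseconds


def to_common_format(raw_fixations, img_width, img_height):
    """Convert dataset-specific pixel records of the form
    (subject, image, x_px, y_px, dur_ms) into normalized records.

    Each source database would need its own small adapter like this,
    reflecting its native coordinate system and units.
    """
    records = []
    for subject, image, x_px, y_px, dur_ms in raw_fixations:
        records.append(Fixation(
            subject=subject,
            image=image,
            x=x_px / img_width,
            y=y_px / img_height,
            duration_ms=dur_ms,
        ))
    return records


# Example: two fixations recorded on a 1024x768 stimulus.
raw = [(0, "img001", 512, 384, 230.0),
       (0, "img001", 256, 192, 180.0)]
records = to_common_format(raw, img_width=1024, img_height=768)
```

Once every database is expressed in such a shared record type, analyses such as fixation density maps or inter-observer agreement can be written once and run unchanged across all 16 converted datasets.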
