Contribution of Color Information in Visual Saliency Model for Videos

Much research has been concerned with the contribution of the low-level features of a visual scene to the deployment of visual attention, and bottom-up saliency models have been developed to predict gaze locations from these features. Color, alongside brightness, contrast, and motion, is considered one of the primary features for computing bottom-up saliency; however, its contribution to guiding eye movements when viewing natural scenes has been debated. We investigated the contribution of color information in a bottom-up visual saliency model. The model's efficiency was tested using experimental data from 45 observers who were eye-tracked while freely exploring a large data set of color and grayscale videos. The two sets of recorded eye positions, for grayscale and for color videos, were compared with the predictions of a luminance-based saliency model [2]. We then incorporated chrominance information into the model. Results show that color information improves the performance of the saliency model in predicting eye positions.
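To make the chrominance extension concrete, the sketch below shows one way chrominance can be folded into such a model: an RGB frame is projected onto an achromatic axis and two chromatic opponent axes (in the spirit of the cardinal directions of color space [19]), and each chromatic channel is turned into a conspicuity map by difference-of-Gaussians center-surround filtering. The opponent-axis coefficients, the Gaussian scales, and the max-normalization fusion are illustrative assumptions, not the exact transform or fusion used in the model evaluated here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative projection from linear RGB onto an achromatic axis (A)
# and two chromatic opponent axes (Cr1: red-green, Cr2: blue-yellow).
# These coefficients are a rough approximation chosen for the example.
RGB_TO_OPPONENT = np.array([
    [0.299,  0.587,  0.114],   # A:   luminance
    [0.500, -0.500,  0.000],   # Cr1: red-green
    [0.250,  0.250, -0.500],   # Cr2: blue-yellow
])

def center_surround(channel, sigma_center=2.0, sigma_surround=8.0):
    """Difference-of-Gaussians contrast map for one channel."""
    return np.abs(gaussian_filter(channel, sigma_center)
                  - gaussian_filter(channel, sigma_surround))

def chrominance_saliency(rgb_frame):
    """Fuse the contrast maps of the two chromatic channels of an
    H x W x 3 frame into one map, normalized to [0, 1]."""
    opponent = np.tensordot(rgb_frame, RGB_TO_OPPONENT.T, axes=1)
    cr1 = center_surround(opponent[..., 1])
    cr2 = center_surround(opponent[..., 2])
    fused = cr1 / (cr1.max() + 1e-8) + cr2 / (cr2.max() + 1e-8)
    return fused / (fused.max() + 1e-8)
```

Comparing recorded eye positions with such maps requires a quantitative criterion. One common choice, given here only as an example of how the comparison can be scored rather than as the criterion actually used in this study, is the Normalized Scanpath Saliency (NSS): the saliency map is z-scored and sampled at the recorded eye positions, so values well above zero indicate that the model assigns above-average saliency to fixated locations.

```python
def normalized_scanpath_saliency(saliency_map, eye_positions):
    """Mean z-scored saliency sampled at recorded eye positions,
    given as an iterable of integer (row, col) pixel coordinates."""
    z = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    rows, cols = np.asarray(eye_positions, dtype=int).T
    return float(z[rows, cols].mean())
```

Computing such a score per frame for the luminance-only model and for the model augmented with chrominance, on both the color and grayscale viewing conditions, is the kind of comparison that quantifies the contribution of color summarized above.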

[1] U. Leonards, et al., What makes cast shadows hard to see?, 2010, Journal of Vision.

[2] N. Guyader, et al., Modelling Spatio-Temporal Saliency to Predict Gaze Direction for Short Videos, 2009, International Journal of Computer Vision.

[3] K. Gegenfurtner, et al., Cortical mechanisms of colour vision, 2003, Nature Reviews Neuroscience.

[4] S. Frintrop, et al., VOCUS: A Visual Attention System for Object Detection and Goal-Directed Search, 2006, Lecture Notes in Computer Science.

[5] A. Trémeau, et al., Images Couleur : de l'acquisition au traitement (Color Images: From Acquisition to Processing), 2004.

[6] D. J. Field, et al., Relations between the statistics of natural images and the response properties of cortical cells, 1987, Journal of the Optical Society of America A.

[7] G. Buchsbaum, et al., Trichromacy, opponent colours coding and optimum colour information transmission in the retina, 1983, Proceedings of the Royal Society of London, Series B.

[8] A. Coutrot, et al., Influence of soundtrack on eye movements during video exploration, 2012.

[9] A. Rahman, et al., Influence of number, location and size of faces on gaze in video, 2014.

[10] J. M. Henderson, et al., Clustering of Gaze During Dynamic Scene Viewing is Predicted by Motion, 2011, Cognitive Computation.

[11] J. H. Goldberg, et al., Identifying fixations and saccades in eye-tracking protocols, 2000, ETRA.

[12] J. Theeuwes, et al., Attentional and oculomotor inhibition, 2010.

[13] K. Mullen, et al., Orientation selectivity in luminance and color vision assessed using 2-d band-pass filtered spatial noise, 2005, Vision Research.

[14] A. Bulling, et al., Introduction to the PETMEI special issue, 2014.

[15] A. Treisman, et al., A feature-integration theory of attention, 1980, Cognitive Psychology.

[16] S. Yantis, et al., Visual Attention: Bottom-Up Versus Top-Down, 2004, Current Biology.

[17] P. Baldi, et al., Bayesian surprise attracts human attention, 2005, Vision Research.

[18] N. Guyader, et al., Improving Visual Saliency by Adding ‘Face Feature Map’ and ‘Center Bias’, 2012, Cognitive Computation.

[19] D. W. Heeley, et al., Cardinal directions of color space, 1982, Vision Research.

[20] C. Wallraven, et al., Serial exploration of faces: comparing vision and touch, 2012, Journal of Vision.

[21] L. Itti, 1999.

[22] N. Guyader, et al., Parallel implementation of a spatio-temporal visual saliency model, 2010, Journal of Real-Time Image Processing.

[23] T. Baccino, et al., New insights into ambient and focal visual fixations using an automatic classification algorithm, 2011, i-Perception.

[24] L. Jeffery, et al., Race-specific norms for coding face identity and a functional role for norms, 2010, Journal of Vision.

[25] N. Guyader, et al., A Functional and Statistical Bottom-Up Saliency Model to Reveal the Relative Contributions of Low-Level Visual Guiding Factors, 2010, Cognitive Computation.

[26] G. Rousselet, et al., Is it an animal? Is it a human face? Fast processing in upright and inverted natural scenes, 2003, Journal of Vision.

[27] P. König, et al., What's color got to do with it? The influence of color on visual attention in different categories, 2008, Journal of Vision.

[28] D. DeCarlo, et al., Robust clustering of eye movement recordings for quantification of visual interest, 2004, ETRA.

[29] C. Koch, A Model of Saliency-Based Visual Attention for Rapid Scene Analysis, 2009.

[30] R. J. Baddeley, et al., High frequency edges (but not contrast) predict where we fixate: A Bayesian system identification analysis, 2006, Vision Research.

[31] O. Le Meur, et al., Predicting visual fixations on video based on low-level visual features, 2007, Vision Research.

[32] T. Martinetz, et al., Variability of eye movements when viewing dynamic natural scenes, 2010, Journal of Vision.

[33] T. Foulsham, et al., Comparing scanpaths during scene encoding and recognition: A multi-dimensional approach, 2012.

[34] N. Guyader, et al., When viewing natural scenes, do abnormal colors impact on spatial or temporal parameters of eye movements?, 2012, Journal of Vision.