Color information in a model of saliency

Bottom-up saliency models have been developed to predict the location of gaze from the low-level features of visual scenes, such as intensity, color, spatial frequency, and motion. In this paper, we investigate the contribution of color features to computing bottom-up saliency. We incorporated a chrominance pathway into a luminance-based model (Marat et al. [10]) and evaluated the performance of the model with and without the chrominance pathway. We also added an efficient multi-GPU implementation of the chrominance pathway to the parallel implementation of the luminance-based model proposed by Rahman et al. [4], preserving real-time performance. Results show that color information improves the performance of the saliency model in predicting eye positions.
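A chrominance pathway of this kind typically projects the input onto one achromatic and two chromatic opponent channels, filters each channel, and fuses the resulting maps with the luminance pathway. The Python sketch below illustrates that general idea only; it is not the authors' implementation, which relies on retina-inspired and cortical-like filtering on a multi-GPU pipeline. The opponent axes loosely follow the cardinal directions of color space [16], and every function name, filter size, and fusion weight here is an illustrative assumption.

import numpy as np
from scipy.ndimage import uniform_filter

def rgb_to_opponent(rgb):
    # Project an RGB frame onto one achromatic (luminance) axis and two
    # chromatic opponent axes (red-green, blue-yellow); a rough,
    # illustrative approximation of cardinal-direction opponent channels.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = (r + g + b) / 3.0        # achromatic channel
    rg = r - g                     # red-green opponent channel
    by = b - (r + g) / 2.0         # blue-yellow opponent channel
    return lum, rg, by

def pathway_saliency(channel):
    # Stand-in for one pathway's filtering stage: center-surround
    # contrast computed as the difference of two box blurs.
    center = uniform_filter(channel, size=3)
    surround = uniform_filter(channel, size=15)
    return np.abs(center - surround)

def saliency(rgb, w_lum=0.5, w_chrom=0.5):
    # Fuse luminance and chrominance saliency maps; the equal weights
    # are placeholders, not the paper's tuned values.
    lum, rg, by = rgb_to_opponent(rgb.astype(np.float64))
    s = w_lum * pathway_saliency(lum) \
        + w_chrom * (pathway_saliency(rg) + pathway_saliency(by))
    return s / (s.max() + 1e-12)   # normalize to [0, 1]

# Example: saliency map for a random placeholder frame.
frame = np.random.rand(240, 320, 3)
smap = saliency(frame)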

[1] O. Le Meur, et al. Predicting visual fixations on video based on low-level visual features, 2007, Vision Research.

[2] Asha Iyer, et al. Components of bottom-up gaze allocation in natural images, 2005, Vision Research.

[3] K. Gegenfurtner, et al. Cortical mechanisms of colour vision, 2003, Nature Reviews Neuroscience.

[4] Nathalie Guyader, et al. Parallel implementation of a spatio-temporal visual saliency model, 2010, Journal of Real-Time Image Processing.

[5] Simone Frintrop, et al. VOCUS: A Visual Attention System for Object Detection and Goal-Directed Search, 2006, Lecture Notes in Computer Science.

[6] Nathalie Guyader, et al. When viewing natural scenes, do abnormal colors impact on spatial or temporal parameters of eye movements?, 2012, Journal of Vision.

[7] Roland J. Baddeley, et al. High frequency edges (but not contrast) predict where we fixate: A Bayesian system identification analysis, 2006, Vision Research.

[8] L. Itti, et al. Visual causes versus correlates of attentional selection in dynamic scenes, 2006, Vision Research.

[9] Laurent Bedat. Aspects psychovisuels de la perception des couleurs. Application au codage d'images couleur fixe avec compression de l'information [Psychovisual aspects of color perception: application to still color image coding with data compression], 1998.

[10] Nathalie Guyader, et al. Modelling Spatio-Temporal Saliency to Predict Gaze Direction for Short Videos, 2009, International Journal of Computer Vision.

[11] Thomas Martinetz, et al. Variability of eye movements when viewing dynamic natural scenes, 2010, Journal of Vision.

[12] Christof Koch, et al. A Model of Saliency-Based Visual Attention for Rapid Scene Analysis, 1998, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[13] Peter König, et al. What's color got to do with it? The influence of color on visual attention in different categories, 2008, Journal of Vision.

[14] Susan L. Franzel, et al. Guided search: an alternative to the feature integration model for visual search, 1989, Journal of Experimental Psychology: Human Perception and Performance.

[15] Nathalie Guyader, et al. Contribution of Color Information in Visual Saliency Model for Videos, 2014, ICISP.

[16] D. W. Heeley, et al. Cardinal directions of color space, 1982, Vision Research.

[17] L. Itti, 1999.

[18] Nathalie Guyader, et al. A Functional and Statistical Bottom-Up Saliency Model to Reveal the Relative Contributions of Low-Level Visual Guiding Factors, 2010, Cognitive Computation.

[19] Kathy T. Mullen, et al. Orientation selectivity in luminance and color vision assessed using 2-d bandpass filtered spatial noise, 2010.

[20] A. Treisman, et al. A feature-integration theory of attention, 1980, Cognitive Psychology.

[21] A. Rahman, et al. Influence of number, location and size of faces on gaze in video, 2014.

[22] D. J. Field, et al. Relations between the statistics of natural images and the response properties of cortical cells, 1987, Journal of the Optical Society of America A.

[23] K. Mullen, et al. Orientation selectivity in luminance and color vision assessed using 2-d band-pass filtered spatial noise, 2005, Vision Research.

[24] S. Yantis, et al. Visual Attention: Bottom-Up Versus Top-Down, 2004, Current Biology.