A statistical mixture method to reveal bottom-up and top-down factors guiding eye movements

When people gaze at real scenes, their visual attention is driven both by bottom-up processes derived from the signal properties of the scene and by top-down effects such as the task, affective state, prior knowledge, or semantic context. The context of this study is the assessment of manufactured objects (here, a car cab interior). Within this dedicated context, this work describes a set of methods to analyze eye movements during the visual evaluation of a scene; these methods can nevertheless be adapted to more general settings. We define a statistical model that explains the eye fixations measured experimentally by eye-tracking, even when the signal-to-noise ratio is poor or raw data are scarce. One novelty of the approach is the use of complementary experimental data obtained with the “Bubbles” paradigm. The proposed model is an additive mixture of several a priori spatial density distributions, one per factor guiding visual attention. The “Bubbles” paradigm is adapted here to reveal the semantic density distribution, which represents the cumulative effect of the top-down factors. The contribution of each factor is then compared across products and tasks in order to highlight the properties of visual attention and cognitive activity in each situation.
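As a rough illustration of the additive-mixture idea (a sketch under stated assumptions, not the authors' implementation), the fixation density can be written as $p(x) = \sum_k w_k\, p_k(x)$, where each $p_k$ is an a priori spatial map (for instance a bottom-up saliency map, a central bias, and the semantic map recovered with the “Bubbles” paradigm) and only the mixture weights $w_k$ are estimated from the recorded fixations. The Python sketch below assumes the component maps have already been evaluated at the fixation locations; the function name and array layout are illustrative, and the weights are fitted with a standard EM iteration.

```python
import numpy as np

def fit_mixture_weights(component_densities, n_iter=200, tol=1e-8):
    """Estimate the weights of an additive mixture of *fixed* a priori densities.

    component_densities: array of shape (n_fixations, n_components) whose entry
    [i, k] is the a priori density of factor k (e.g. saliency, central bias,
    semantic map from "Bubbles") evaluated at fixation i.
    Returns one weight per guiding factor (non-negative, summing to 1).
    """
    n_fix, n_comp = component_densities.shape
    weights = np.full(n_comp, 1.0 / n_comp)   # start from uniform weights
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: responsibility of each factor for each fixation
        weighted = component_densities * weights            # (n_fix, n_comp)
        total = weighted.sum(axis=1, keepdims=True)         # mixture density at each fixation
        resp = weighted / total
        # M-step: weights are the mean responsibilities over fixations
        weights = resp.mean(axis=0)
        # stop when the log-likelihood no longer improves
        ll = np.log(total).sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return weights
```

Under this reading, a larger estimated weight for the semantic component in a given product-and-task condition would indicate a stronger top-down contribution to gaze in that situation, which is the kind of comparison the abstract describes.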
