A real-time visual attention model for predicting gaze point during first-person exploration of virtual environments

This paper introduces a novel visual attention model that computes the user's gaze position automatically, i.e. without a gaze-tracking system. Our model is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context that can compute, in real time, a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, in contrast to previous models that use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and the cognitive processes that take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines bottom-up and top-down components to compute a continuous gaze point position on screen intended to match the user's actual gaze. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with an accuracy gain of more than 100%. This suggests that computing a gaze point in real time in a 3D virtual environment is possible and is a valid alternative to object-based approaches.
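The combination of bottom-up and top-down components into a single on-screen gaze point can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the map names, the multiplicative combination, and the screen-center bias used as a top-down example are assumptions for the sake of the sketch.

```python
import numpy as np

def gaze_point(bottom_up, top_down):
    """Return the (x, y) pixel of maximum combined attention.

    bottom_up: 2D saliency map (e.g. from color/intensity contrast).
    top_down:  2D map of task-related weights (e.g. a screen-center
               bias during first-person navigation).
    Both maps are assumed normalized to [0, 1] and of the same shape.
    """
    attention = bottom_up * top_down  # element-wise combination (assumed)
    y, x = np.unravel_index(np.argmax(attention), attention.shape)
    return float(x), float(y)

# Example: a centered Gaussian top-down bias over a random bottom-up map.
h, w = 48, 64
rng = np.random.default_rng(0)
bottom_up = rng.random((h, w))
yy, xx = np.mgrid[0:h, 0:w]
top_down = np.exp(-(((xx - w / 2) ** 2) + ((yy - h / 2) ** 2)) / (2 * 12.0 ** 2))
print(gaze_point(bottom_up, top_down))
```

In practice a continuous gaze point would be smoothed over frames (e.g. to mimic smooth pursuit) rather than taken as a per-frame argmax, but the per-frame selection above conveys the core idea.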
