With the explosion of Virtual Reality technologies, the production and use of omnidirectional images (a.k.a. 360° images) presents new challenges in compression, transmission, and rendering. Evaluating the quality of images generated by these technologies is therefore paramount. As the exploration of 360° images within a Head-Mounted Display (HMD) is non-uniform, the current state of the art proposes a saliency weighting of distortions (between a reference and an impaired version), which emphasizes impairments in frequently attended regions. So far, saliency maps have been generated from head motion alone, under the assumption that the viewport orientation is sufficient to determine saliency; the added value of eye-gaze tracking within the viewport has not yet been studied in this domain. In this work, an eye-tracking experiment is performed using an HMD, followed by a gaze analysis to characterize visual attention behavior within the viewport. Results suggest that most eye-gaze fixations fall rather far from the center of the viewport. Across contents and observers, gaze fixations are quasi-isotropically distributed in orientation, and the average distance of gaze fixations from the viewport center varies between 14 and 20 visual degrees; these values correspond to a retinal shift beyond the parafovea and perifovea, into the extra-perifoveal region. A saliency-weighting model based on foveation centered at the middle of the viewport appears to be a correct assumption in only 2.5% of the scenarios observed, and is consequently questionable. There is therefore a need to refine saliency modeling and weighting for quality assessment of panoramic viewing.