User-centric Visual Attention Estimation Based on Relationship between Image and Eye Gaze Data

This paper presents a method for estimating user-centric visual attention based on the relationship between image and eye gaze data. The proposed method focuses on the relationship between visual features calculated from images and saliency values calculated from eye gaze data. Specifically, the method computes a saliency map for each training image from the individual eye gaze data recorded while that user viewed the image. From the resulting pairs of visual features and gaze-based saliency values, user-centric saliency can then be estimated for a new test image. Our contribution is a simple yet effective estimation model that can learn this relationship from a limited amount of individual eye gaze data. Experimental results show the effectiveness of the proposed method.
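
The abstract does not detail the estimation model, so the following is a minimal sketch under stated assumptions: gaze-based saliency maps are built by Gaussian-smoothing fixation points (a common construction, not confirmed by the paper), visual features are a per-location feature map (e.g., from a pretrained CNN) at the same spatial resolution as the downsampled saliency map, and ridge regression stands in for whatever regressor the authors actually use. All function names (`gaze_saliency_map`, `train_saliency_regressor`, `estimate_saliency`) are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.linear_model import Ridge


def gaze_saliency_map(fixations, shape, sigma=25.0):
    """Build a gaze-based saliency map by placing impulses at the
    user's fixation points and smoothing with a Gaussian.
    (Assumption: the paper does not specify the exact construction.)"""
    m = np.zeros(shape, dtype=np.float64)
    for x, y in fixations:           # fixation coordinates in pixels
        m[int(y), int(x)] += 1.0
    m = gaussian_filter(m, sigma=sigma)
    return m / (m.max() + 1e-12)     # normalize to [0, 1]


def train_saliency_regressor(feature_maps, saliency_maps):
    """Fit a regressor from per-location visual features to gaze-based
    saliency values. Each feature map is (H, W, D); each saliency map
    must already be resized to the same (H, W)."""
    X = np.vstack([f.reshape(-1, f.shape[-1]) for f in feature_maps])
    y = np.concatenate([s.ravel() for s in saliency_maps])
    return Ridge(alpha=1.0).fit(X, y)


def estimate_saliency(model, feature_map):
    """Predict a user-centric saliency map for a new test image
    from its (H, W, D) visual feature map."""
    h, w, d = feature_map.shape
    pred = model.predict(feature_map.reshape(-1, d))
    return pred.reshape(h, w)
```

A simple linear regressor keeps the number of learned parameters small, which is consistent with the paper's emphasis on training from a limited amount of individual eye gaze data.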
