Discriminative Robust Gaze Estimation Using Kernel-DMCCA Fusion

The proposed framework employs discriminative analysis for gaze estimation using kernel discriminative multiple canonical correlation analysis (K-DMCCA), which fuses different feature vectors to account for variations in head pose, illumination and occlusion. The feature extraction component of the framework includes spatial indexing, statistical and geometrical elements. Gaze estimation is performed by aggregating these features and transforming them into a higher-dimensional space using an RBF kernel with its spread factor. The fused features output by K-DMCCA are robust to illumination and occlusion, and the method is calibration-free. Our algorithm is validated on the MPII, CAVE, ACS and EYEDIAP datasets. The framework makes two main contributions: enhancing the performance of DMCCA with a kernel, and introducing the quadtree as an iris region descriptor. Spatial indexing with a quadtree is a robust method for detecting which quadrant the iris occupies and for locating the iris boundary, and it complements the statistical and geometrical indexing, all of which are calibration-free. Our method achieved gaze estimation errors of 4.8° on CAVE, 4.6° on MPII, 5.1° on ACS and 5.9° on EYEDIAP, respectively. The proposed framework provides insight into the methodology of multi-feature fusion for gaze estimation.
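The kernel step described above can be illustrated with a minimal sketch: each feature view is mapped into a higher-dimensional space via an RBF (Gaussian) Gram matrix, the Gram matrices are centered, and a regularized kernel CCA extracts the leading canonical correlation between two views. This is not the authors' implementation of K-DMCCA (which is discriminative and handles more than two views); the function names, the spread factor value `sigma`, the regularization constant `reg`, and the random placeholder feature sets are all assumptions for illustration.

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """RBF Gram matrix: K[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def center_gram(K):
    """Center the Gram matrix in feature space: H K H with H = I - 11^T/n."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca_first_corr(Ka, Kb, reg=1e-3):
    """Leading canonical correlation between two centered Gram matrices
    (regularized two-view kernel CCA; the eigenvalues of M @ N are rho^2)."""
    n = Ka.shape[0]
    I = np.eye(n)
    M = np.linalg.solve(Ka @ Ka + reg * I, Ka @ Kb)
    N = np.linalg.solve(Kb @ Kb + reg * I, Kb @ Ka)
    eigvals = np.linalg.eigvals(M @ N)
    return float(np.sqrt(np.max(eigvals.real.clip(min=0.0))))

# Two hypothetical feature views (e.g., geometrical and statistical descriptors).
rng = np.random.default_rng(0)
Xg = rng.normal(size=(40, 5))   # placeholder geometrical features
Xs = rng.normal(size=(40, 8))   # placeholder statistical features
Kg = center_gram(rbf_gram(Xg, sigma=2.0))
Ks = center_gram(rbf_gram(Xs, sigma=2.0))
```

In the full framework the kernelized views would be fused by the discriminative multi-view objective of DMCCA rather than the plain two-view correlation shown here; the sketch only shows why the RBF spread factor matters, since it controls how quickly kernel similarity decays with feature distance.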
