Driver Gaze Region Estimation without Use of Eye Movement

Automated estimation of the allocation of a driver's visual attention could be a critical component of future advanced driver assistance systems. In theory, vision-based eye tracking can provide a good estimate of gaze location, but in practice, eye tracking from video is challenging due to sunglasses, eyeglass reflections, lighting conditions, occlusions, motion blur, and other factors. Head pose estimation, on the other hand, is robust to many of these effects but cannot localize gaze as finely. For the purpose of keeping the driver safe, it is sufficient to partition gaze into regions. The proposed system extracts facial features and classifies their spatial configuration into six regions in real time, achieving an average accuracy of 91.4 percent at an average decision rate of 11 Hz on a dataset of 50 drivers from an on-road study.
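As a rough illustration of the feature-and-classify approach the abstract describes, the sketch below builds a six-way gaze-region classifier from normalized facial-landmark coordinates. Everything specific here is an assumption: synthetic landmarks stand in for a real face-alignment detector (e.g. dlib's 68-point shape predictor, per the cited Kazemi and Sullivan work), and the region names and the random-forest choice are hypothetical, not the authors' actual pipeline.

```python
# Illustrative sketch only, not the paper's implementation. Synthetic
# landmarks replace a real detector, and REGIONS and the classifier
# choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

REGIONS = ["road", "left", "right", "rearview_mirror",
           "instrument_cluster", "center_stack"]  # hypothetical six regions

def landmark_features(landmarks):
    """Flatten 2-D landmarks into a translation- and scale-invariant vector."""
    pts = np.asarray(landmarks, dtype=float)
    centered = pts - pts.mean(axis=0)          # remove translation
    scale = np.linalg.norm(centered)
    return (centered / (scale if scale else 1.0)).ravel()

def rotate(points, theta):
    """In-plane rotation, standing in for head pose altering the
    landmarks' spatial configuration."""
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]]).T

rng = np.random.default_rng(0)
base = rng.normal(size=(68, 2))                # 68 synthetic landmark points

# Toy training set: each gaze region corresponds to a distinct
# deformation of the landmark cloud plus measurement noise.
X, y = [], []
for label in range(len(REGIONS)):
    for _ in range(50):
        noisy = rotate(base, 0.4 * label) + rng.normal(scale=0.05,
                                                       size=base.shape)
        X.append(landmark_features(noisy))
        y.append(label)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

A real system would instead feed per-frame landmarks from a face-alignment model into `landmark_features` and map `clf.predict(...)` back to a region name; the normalization step is what makes the classifier depend on the landmarks' spatial configuration rather than the face's position or size in the image.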

[1] Miad Faezipour, et al. Eye Tracking and Head Movement Detection: A State-of-Art Survey, 2013, IEEE Journal of Translational Engineering in Health and Medicine.

[2] Mauricio Muñoz, et al. Analysis of Drivers' Head and Eye Movement Correspondence: Predicting Drivers' Glance Location Using Head Rotation Data, 2017.

[3] Josephine Sullivan, et al. One millisecond face alignment with an ensemble of regression trees, 2014, 2014 IEEE Conference on Computer Vision and Pattern Recognition.

[4] Mohan M. Trivedi, et al. Head Pose Estimation in Computer Vision: A Survey, 2009, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[5] Neil A. Dodgson, et al. Robust real-time pupil tracking in highly off-axis images, 2012, ETRA.

[6] Stefanos Zafeiriou, et al. 300 Faces in-the-Wild Challenge: The First Facial Landmark Localization Challenge, 2013, 2013 IEEE International Conference on Computer Vision Workshops.

[7] Junichi Odagiri, et al. Pupil detection in the presence of specular reflection, 2014, ETRA.

[8] Bryan Reimer, et al. Monitoring, managing, and motivating driver safety and well-being, 2011, IEEE Pervasive Computing.

[9] Thomas A. Dingus, et al. The Impact of Driver Inattention on Near-Crash/Crash Risk: An Analysis Using the 100-Car Naturalistic Driving Study Data, 2006.

[10] Davis E. King, et al. Dlib-ml: A Machine Learning Toolkit, 2009, J. Mach. Learn. Res..

[11] Gaël Varoquaux, et al. Scikit-learn: Machine Learning in Python, 2011, J. Mach. Learn. Res..

[12] Volkan Atalay, et al. Delaunay Triangulation based 3D Human Face Modeling from Uncalibrated Images, 2004, 2004 Conference on Computer Vision and Pattern Recognition Workshop.

[13] David G. Kidd, et al. Multi-modal assessment of on-road demand of voice and manual phone calling and voice navigation entry across two embedded vehicle systems, 2015, Ergonomics.

[14] Louis Tijerina, et al. Eye Glance and Head Turn Correspondence during Secondary Task Performance in Simulator Driving, 2013.