A method of detecting and tracking irises and eyelids in video

Abstract We locate the eye corners, eyelids, and irises in every frame of an image sequence and analyze the movements of the irises and eyelids to determine changes in gaze direction and blinking, respectively. Using simple models for the motions of the head and eyes, we determine the head-independent motions of the irises and eyelids by stabilizing for the head motion. The head-independent motions of the irises can be used to identify behaviors such as saccades and smooth pursuit. By tracking the upper eyelid and using the distance between its apex and the center of the iris, we detect instances of eye closure during blinking. In experiments on two short image sequences, in one of which the subject was wearing glasses, we located the irises in every frame in which the eyes were fully or partially open, and located the eyelids 80% of the time. When motion information in the form of normal flow was used, the irises were tracked successfully in every frame in which the eyes were fully or partially open, and the eyelids were located and tracked successfully 90% of the time.

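The abstract describes blink detection only at a high level, via the distance between the upper-eyelid apex and the iris center. The sketch below illustrates that idea under stated assumptions: it presumes per-frame estimates of the upper-eyelid contour, the iris center, and the iris radius are already available, and the function names, the normalization by iris radius, and the closure threshold are illustrative choices, not the authors' implementation.

```python
import numpy as np

def eyelid_apex(eyelid_points):
    """Return the apex (highest point) of the upper eyelid.

    `eyelid_points` is an (N, 2) array of (x, y) image coordinates sampled
    along the upper eyelid; smaller y means higher in the image.
    """
    pts = np.asarray(eyelid_points, dtype=float)
    return pts[np.argmin(pts[:, 1])]

def eye_closure_signal(apex, iris_center, iris_radius):
    """Distance from the upper-eyelid apex to the iris center, normalized
    by the iris radius so the signal is roughly scale-invariant."""
    return np.linalg.norm(np.asarray(apex) - np.asarray(iris_center)) / iris_radius

def detect_closed_frames(signals, closed_thresh=0.3):
    """Flag frames whose normalized apex-to-iris distance falls below a
    threshold, i.e. the eyelid has descended over the iris (eye closure)."""
    return [i for i, s in enumerate(signals) if s < closed_thresh]

if __name__ == "__main__":
    # Toy per-frame measurements: the eyelid apex descends toward the iris
    # center around frames 3-4, simulating a blink.
    iris_center, iris_radius = np.array([60.0, 50.0]), 8.0
    apexes = [np.array([60.0, 50.0 - d]) for d in (10, 9, 8, 2, 1, 7, 10)]
    signal = [eye_closure_signal(a, iris_center, iris_radius) for a in apexes]
    print("closure signal:", np.round(signal, 2))
    print("closed frames: ", detect_closed_frames(signal))
```

In this toy run, the normalized signal drops well below the threshold in the two frames where the apex nearly coincides with the iris center, so those frames are reported as eye closure; a blink would then correspond to a short run of such frames bounded by open-eye frames.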