Computing 3-D head orientation from a monocular image sequence

An approach for estimating 3D head orientation in a monocular image sequence is proposed. The approach employs recently developed image-based parameterized tracking of the face and facial features to locate the region in which a sub-pixel parameterized shape estimation of the eye's boundary is performed. This involves tracking five points: four at the eye corners and a fifth at the tip of the nose. The authors describe an approach that relies on the coarse structure of the face to compute orientation relative to the camera plane. The approach employs projective invariance of the cross-ratios of the eye corners and anthropometric statistics to estimate head yaw, roll and pitch. Analytical and experimental results are reported.
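To make the projective-invariance idea concrete, the sketch below shows how a cross-ratio can be computed from four tracked eye-corner points, together with an illustrative in-plane roll estimate taken from the angle of the line joining the outer corners. This is only a minimal illustration under assumed pixel coordinates; the function names and the specific roll formula are not from the paper, and the full yaw/pitch estimation additionally requires the nose-tip point and anthropometric statistics as described in the abstract.

```python
import numpy as np

def cross_ratio(x1, x2, x3, x4):
    """Cross-ratio of four collinear points given by their 1-D coordinates
    along the line; this quantity is invariant under projective maps."""
    return ((x3 - x1) * (x4 - x2)) / ((x3 - x2) * (x4 - x1))

def roll_from_eye_corners(outer_left, outer_right):
    """Illustrative roll estimate: image-plane angle of the line joining
    the two outer eye corners (assumes roll is that in-plane rotation)."""
    dx = outer_right[0] - outer_left[0]
    dy = outer_right[1] - outer_left[1]
    return np.degrees(np.arctan2(dy, dx))

# Hypothetical tracked eye-corner coordinates (pixels), ordered left to right:
# outer-left, inner-left, inner-right, outer-right.
corners = np.array([[120.0, 200.0],
                    [160.0, 201.0],
                    [200.0, 202.0],
                    [240.0, 203.0]])

# Project the corners onto the line through the two outer corners and take
# their 1-D positions along it; the cross-ratio of those positions is the
# projective invariant used to reason about the out-of-plane rotation (yaw).
direction = corners[3] - corners[0]
direction /= np.linalg.norm(direction)
s = (corners - corners[0]) @ direction
print("cross-ratio:", cross_ratio(*s))
print("roll (deg): ", roll_from_eye_corners(corners[0], corners[3]))
```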
