Depth-based recovery of human facial features from video sequences

We propose a method for locating facial features in a video sequence captured by a camcorder undergoing strong translational motion. Pairs of stereo images containing frontal views of the human subject are sampled from the sequence. A multiresolution hierarchical matching algorithm finds point correspondences over a large disparity range. The task of locating facial features such as the eyes, nose, and mouth is aided by depth information derived from the matching data. We present experimental results that validate our approach.
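
As a rough illustration of the kind of coarse-to-fine stereo matching the abstract describes, the sketch below computes disparity on a downsampled image pair first and uses it to bound the disparity search at full resolution, then converts disparity to depth. This is not the paper's algorithm: it is a minimal sketch assuming OpenCV's StereoBM/StereoSGBM block matchers, and the frame file names and calibration values (focal length f, baseline B) are placeholders.

```python
# Coarse-to-fine stereo sketch (illustrative, not the authors' method).
# A coarse pass over a large disparity range narrows the search window
# for the full-resolution pass, which helps when strong camera
# translation produces large disparities.
import cv2
import numpy as np

# Hypothetical stereo pair sampled from the video sequence.
left = cv2.imread("frame_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("frame_right.png", cv2.IMREAD_GRAYSCALE)

# Coarse level: quarter resolution, wide disparity search.
left_c = cv2.pyrDown(cv2.pyrDown(left))
right_c = cv2.pyrDown(cv2.pyrDown(right))
coarse = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp_c = coarse.compute(left_c, right_c).astype(np.float32) / 16.0  # fixed-point -> pixels

# Use coarse disparity statistics to bound the fine-level search range
# (scaled by 4 to account for the two pyrDown steps).
valid = disp_c[disp_c > 0] * 4.0
d_min = int(max(0, np.percentile(valid, 5) - 16))
d_num = int(np.ceil((np.percentile(valid, 95) + 16 - d_min) / 16.0)) * 16

# Fine level: full resolution over the narrowed range.
fine = cv2.StereoSGBM_create(minDisparity=d_min, numDisparities=d_num, blockSize=7)
disp = fine.compute(left, right).astype(np.float32) / 16.0

# Depth from disparity: Z = f * B / d, with f in pixels and B in metres.
f, B = 800.0, 0.1  # assumed calibration values
depth = np.where(disp > 0, f * B / disp, 0.0)
```

The resulting depth map could then be thresholded or segmented to isolate protruding regions such as the nose, giving the kind of depth cue the abstract says aids facial feature localization.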