Towards view-based pose-invariant face recognition under a single light source and ambient light

In recent years, computerized human face recognition has been an active research area. Existing work has demonstrated acceptable recognition performance on frontal, expressionless views of faces under controlled lighting. One of the key remaining problems is how to recognize faces under pose variations. This paper proposes a new virtual view synthesis algorithm that uses two gallery views (a frontal view and a profile view) to synthesize virtual face views in arbitrary poses. The algorithm consists of two stages: shape synthesis and texture synthesis. In the shape synthesis stage, 2D image operations are applied to establish point correspondences between the two views, from which a 3D face model is learned. To analyze and synthesize virtual facial textures, we assume that the gallery views are taken under a single light source attached to the camera plus ambient light. The Lambertian and Phong shading models are introduced and applied to facial texture mapping. The resulting texture synthesis algorithm accommodates arbitrary lighting directions, provided they coincide with the viewing direction. Virtual views in different poses are successfully synthesized by the proposed virtual view synthesis algorithm for view-based pose-invariant face recognition.
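As context for the lighting assumption, the following is a minimal sketch of the standard Lambertian and Phong shading models under a single point source collocated with the camera plus an ambient term; the notation is generic and not taken from the paper:

\[
I_{\text{Lambert}} = k_a I_a + k_d I_\ell \,\max(0,\ \mathbf{n}\cdot\mathbf{l}),
\qquad
I_{\text{Phong}} = k_a I_a + I_\ell \bigl[\, k_d \max(0,\ \mathbf{n}\cdot\mathbf{l}) + k_s \max(0,\ \mathbf{r}\cdot\mathbf{v})^{\alpha} \bigr],
\]

where \(\mathbf{n}\) is the unit surface normal, \(\mathbf{l}\) the unit light direction, \(\mathbf{v}\) the unit viewing direction, and \(\mathbf{r} = 2(\mathbf{n}\cdot\mathbf{l})\mathbf{n} - \mathbf{l}\) the mirror reflection of \(\mathbf{l}\) about \(\mathbf{n}\). When the source is attached to the camera, \(\mathbf{l} = \mathbf{v}\), so the specular term reduces to \(\max(0,\ 2(\mathbf{n}\cdot\mathbf{v})^2 - 1)^{\alpha}\) and the observed intensity depends only on the surface normal and the viewing direction. Presumably this is what allows the reflectance coefficients estimated from the gallery views to be re-shaded for any new viewing (and hence lighting) direction during texture mapping.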