This paper presents a technique for modeling a person's face from a set of digital photographs taken from different views, together with a generic 3D facial mesh obtained from a laser scanner. The modeling approach is based on photogrammetric techniques, in which images are used to recover precise geometry and texture information. Faces are modeled by interactively fitting the 3D facial mesh to the set of images. The fitting process consists of several basic steps. First, multiple views of the human face are captured with cameras at fixed locations. Next, a set of initial corresponding points (typically the corners of the eyes and mouth, the tip of the nose, etc.) is marked manually on the face in each photograph. These points are then used to automatically recover the camera parameters (position, focal length, etc.) for each photograph, as well as the 3D positions of the marked points in space. The recovered 3D positions are then used to adjust the 3D face mesh. Finally, one or more texture maps are extracted for the mesh from the photographs: either a single view-independent texture map, or the original images themselves, which can be used for view-dependent texture mapping.
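The recovery of the 3D positions of the marked points can be sketched with standard linear (DLT) triangulation, assuming the camera projection matrices have already been estimated in the camera-recovery step; the function below is an illustrative sketch, not the paper's actual implementation:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices (assumed already recovered).
    x1, x2: 2D image coordinates of the same marked feature
            (e.g., an eye corner) in each view.
    Returns the 3D point in world coordinates.
    """
    # Each view contributes two rows of the homogeneous system A X = 0,
    # obtained by eliminating the unknown projective depth.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector for the smallest
    # singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With more than two views, each extra photograph simply appends two more rows to the same system, which is why marking a handful of feature points across all images suffices to anchor the mesh fit.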