Modeling Realistic Facial Object from Laser Scanner Data and Photographic Images for Malaysian Craniofacial Database

This paper presents a technique for modeling a person's face from a set of digital photographs taken from different views, combined with a 3D facial mesh acquired by a laser scanner. The modeling approach is based on photogrammetric techniques, in which images are used to recover precise geometry and texture information. Faces are modeled by interactively fitting a 3D facial mesh to a set of images. The fitting process consists of several basic steps. First, multiple views of a human face are captured with cameras at fixed locations. Next, an initial set of corresponding feature points (typically the corners of the eyes and mouth, the tip of the nose, etc.) is manually marked on the face across the different photographs. These points are then used to automatically recover the camera parameters (position, focal length, etc.) corresponding to each photograph, as well as the 3D positions of the marked points in space. The recovered 3D positions are then used to adjust the 3D face mesh. Finally, one or more texture maps are extracted for the 3D face mesh from the photographs: either a single view-independent texture map, or the original images themselves used to perform view-dependent texture mapping.
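As an illustration only (not the authors' implementation), the following Python/OpenCV sketch shows how the camera-recovery and triangulation steps are commonly realized: the pose of each camera is estimated from the marked landmarks against their positions on a generic face mesh (PnP), and the landmarks' 3D positions are then triangulated from two views. It simplifies the paper's setup by assuming known camera intrinsics `K` (whereas the paper also recovers the focal length); the function and variable names are hypothetical.

```python
import numpy as np
import cv2

def recover_pose(pts3d_generic, pts2d, K):
    """Estimate one camera's extrinsics (PnP) from 2D landmarks and the
    matching landmark positions on a generic 3D face mesh."""
    ok, rvec, tvec = cv2.solvePnP(
        pts3d_generic.astype(np.float64),
        pts2d.astype(np.float64),
        K, distCoeffs=None, flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)       # rotation vector -> 3x3 matrix
    return R, tvec                   # world-to-camera rotation and translation

def triangulate(K, R1, t1, R2, t2, pts2d_1, pts2d_2):
    """Recover 3D landmark positions from their projections in two views."""
    P1 = K @ np.hstack([R1, t1])     # 3x4 projection matrix, view 1
    P2 = K @ np.hstack([R2, t2])     # 3x4 projection matrix, view 2
    Xh = cv2.triangulatePoints(P1, P2,
                               pts2d_1.T.astype(np.float64),
                               pts2d_2.T.astype(np.float64))  # 4xN homogeneous
    return (Xh[:3] / Xh[3]).T        # Nx3 Euclidean landmark positions
```

The triangulated landmarks are what drive the mesh adjustment. For the texture-extraction step, the simplest view-independent variant amounts to projecting each mesh vertex into one photograph and sampling its color; blending samples across photographs, or reprojecting at render time, gives view-dependent texture mapping. A minimal sketch of that projection, under the same assumptions as above:

```python
def sample_texture(vertices, image, K, R, t):
    """Project mesh vertices into one photograph and sample per-vertex color
    (the simplest view-independent texturing)."""
    cam = (R @ vertices.T + t).T            # world -> camera coordinates, Nx3
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective divide -> pixel coords
    h, w = image.shape[:2]
    u = np.clip(uv[:, 0].round().astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].round().astype(int), 0, h - 1)
    return image[v, u]                      # per-vertex color samples
```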