MAPPING TEXTURE FROM MULTIPLE CAMERA VIEWS ONTO 3D-OBJECT MODELS FOR COMPUTER ANIMATION

An algorithm for mapping texture from multiple camera views onto a 3D model of a real object is presented. The texture sources are images of an object rotating in front of a stationary calibrated camera. The 3D model is represented by a wireframe built of triangles and is geometrically adjusted to the camera views. The presented approach aims to reduce both the texture distortions that arise at the boundaries between triangles mapped from different camera views and the disturbances caused by parts of the object surface that are not visible in any view. For this purpose, adjacent triangles describing the surface of the 3D model are grouped into homogeneous surface regions, each of which is textured from a common image, followed by local texture filtering at the region boundaries. For triangles not visible in any camera view, a filter has been developed that generates synthetic texture from the texture of adjacent visible triangles. Experimental investigations with different real 3D objects have confirmed the suitability of the proposed technique for computer animation applications.
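As a hedged illustration of the kind of per-triangle view assignment the abstract describes (not the paper's exact algorithm), a common heuristic is to map each wireframe triangle from the camera view that sees it most frontally, i.e. the view whose direction is most anti-parallel to the triangle's outward normal. All function and variable names below are illustrative assumptions:

```python
import math

# Sketch of best-view selection for texturing a triangle mesh from
# multiple calibrated camera views. A triangle that is back-facing in
# every view corresponds to the "not visible" case, for which the paper
# synthesizes texture from adjacent visible triangles.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def triangle_normal(v0, v1, v2):
    """Outward unit normal of a counter-clockwise triangle."""
    return normalize(cross(sub(v1, v0), sub(v2, v0)))

def best_view(triangle, view_dirs):
    """Index of the camera view facing the triangle most directly,
    or None if the triangle is back-facing in every view."""
    n = triangle_normal(*triangle)
    best, best_cos = None, 0.0
    for i, d in enumerate(view_dirs):
        # Camera looks along d, so the triangle faces it when dot(n, -d) > 0.
        c = -dot(n, normalize(d))
        if c > best_cos:
            best, best_cos = i, c
    return best
```

Grouping adjacent triangles that share the same best view then yields the homogeneous surface regions mentioned in the abstract, with texture filtering applied only along the region boundaries.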