Texturing Calibrated Head Model from Images

In this paper we address the well-known problem of producing an animated model of a human head from a pair of orthogonal photographs, and we present a new technique for generating a consistent, high-quality texture for the polygonal 3D mesh that represents a personalized 3D head. The technique combines two existing approaches to this problem and significantly extends them to produce a texture that minimizes visual artifacts while keeping the level of detail as high as possible. After a brief introduction to the complete calibration pipeline that we use, we describe the stages of the texture generation procedure: indexing the available photographs with texture coordinates, finding optimal merging lines, and balancing visual differences between the photographs to allow seamless merging while preserving high-frequency details. The presented technique has been evaluated within head animation software and has demonstrated its ability to yield a high-quality textured model from photographs.
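The seam-balancing step mentioned above can be illustrated with a simple linear cross-fade (feathering) across a merging line. The sketch below is a hypothetical toy, not the paper's actual algorithm: it assumes the two photographs have already been projected into a common texture space and are represented as aligned NumPy arrays, and it blends them in a narrow band around a vertical seam to hide small exposure and color mismatches:

```python
import numpy as np

def feather_blend(img_a, img_b, seam_x, band=8):
    """Cross-fade two aligned texture images across a vertical seam.

    img_a contributes left of the seam, img_b right of it; within a
    band of +/- `band` pixels around seam_x the weights ramp linearly,
    masking small brightness/color differences along the merging line.
    """
    h, w = img_a.shape[:2]
    x = np.arange(w)
    # Weight for img_a: 1 well left of the seam, 0 well right of it,
    # ramping linearly from 1 to 0 inside the blending band.
    w_a = np.clip((seam_x + band - x) / (2.0 * band), 0.0, 1.0)
    # Reshape so the per-column weight broadcasts over rows (and
    # channels, if the images are RGB).
    w_a = w_a.reshape(1, w, *([1] * (img_a.ndim - 2)))
    return w_a * img_a + (1.0 - w_a) * img_b
```

A wider `band` hides larger mismatches but blurs more high-frequency detail near the seam, which is why the paper instead balances the images' visual differences before merging rather than relying on blending width alone.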