Synthesis of virtual views of a scene from two or three real views

This paper presents two methods for synthesizing arbitrary views of a scene from two or three real views. Both methods rely on the extraction of a dense and accurate matching field between the images. The matching field is obtained with an optical flow computation technique based on dynamic programming. The first method performs simple image interpolation with motion compensation in order to synthesize intermediate views. Transformations of the matching field allow the synthesis of other views that are not simply intermediate ones. The second method is based on the reconstruction of a textured 3D surface. Both methods are demonstrated on two- and three-image sets.
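The first method, intermediate-view synthesis by motion-compensated interpolation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a precomputed dense matching field `flow` (one displacement vector per pixel of the first image) and uses nearest-neighbour backward sampling for brevity; the function name and interface are hypothetical.

```python
import numpy as np

def interpolate_view(img0, img1, flow, t):
    """Synthesize an intermediate view at position t in [0, 1] by
    motion-compensated blending of img0 and img1.

    flow[y, x] = (dy, dx) is the dense matching field mapping a pixel
    of img0 to its correspondent in img1. Each intermediate pixel p is
    approximated by sampling img0 at p - t*flow and img1 at
    p + (1-t)*flow, then blending the two samples linearly.
    (Nearest-neighbour sampling; hypothetical interface.)"""
    h, w = img0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Backward sampling coordinates into each source image.
    y0 = np.clip(np.round(ys - t * flow[..., 0]).astype(int), 0, h - 1)
    x0 = np.clip(np.round(xs - t * flow[..., 1]).astype(int), 0, w - 1)
    y1 = np.clip(np.round(ys + (1 - t) * flow[..., 0]).astype(int), 0, h - 1)
    x1 = np.clip(np.round(xs + (1 - t) * flow[..., 1]).astype(int), 0, w - 1)
    return (1 - t) * img0[y0, x0] + t * img1[y1, x1]
```

At t = 0 the result is the first view and at t = 1 the second; values in between give the intermediate views. Synthesizing non-intermediate views, as the second part of the method describes, would correspond to transforming (e.g. scaling or extrapolating) the matching field before warping.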