VR object composition method using stereo vision

In this paper, we propose a VR object composition method using stereo vision. In this method, a VR object is composed of a 3D model and a desired texture image. The method consists of two parts: 3D model reconstruction and texture segmentation. First, stereo images and a texture image are captured from a real object. Second, a 3D model is reconstructed from the stereo images. Third, the texture image is segmented based on color clustering. Finally, the 3D model is displayed with the segmented texture images. Any piece of the texture can then be exchanged for a desired replacement image, so the user can simulate changing the appearance of a real object, for example renovating a room or trying on different clothes.
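The sketch below illustrates one possible realization of this pipeline, not the authors' exact implementation: it assumes OpenCV, a rectified stereo pair, SGBM block matching for depth, k-means for color-cluster segmentation, and hypothetical file names ("left.png", "new_material.png", etc.); the calibrated reprojection matrix Q is stubbed with an identity placeholder.

```python
import cv2
import numpy as np

# --- Step 2: reconstruct depth/3D points from a rectified stereo pair ---
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical inputs
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Reproject disparities to 3D points; Q should come from stereo calibration.
Q = np.eye(4, dtype=np.float32)  # placeholder for the calibrated reprojection matrix
points_3d = cv2.reprojectImageTo3D(disparity, Q)

# --- Step 3: segment the texture image by color clustering (k-means) ---
texture = cv2.imread("texture.png")
pixels = texture.reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
k = 4  # number of color clusters; a tunable assumption
_, labels, _ = cv2.kmeans(pixels, k, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)
segments = labels.reshape(texture.shape[:2])  # per-pixel cluster index

# --- Final step: swap one texture segment for a desired replacement image ---
replacement = cv2.imread("new_material.png")
replacement = cv2.resize(replacement, (texture.shape[1], texture.shape[0]))
target_cluster = 2  # the segment the user wants to replace
composited = texture.copy()
composited[segments == target_cluster] = replacement[segments == target_cluster]
cv2.imwrite("composited_texture.png", composited)  # map this onto the 3D model
```

In a full system the composited texture would be applied to the reconstructed mesh in the VR viewer; here the result is simply written to disk to keep the example self-contained.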
