Photometric Modeling for Mixed Reality

Mixed reality is regarded as a key technology for enhancing a wide variety of applications, from manufacturing to entertainment. Mixed reality differs from virtual reality in that users can feel immersed in a space consisting not only of computer-generated objects but also of real objects. Seamless integration of the virtual and real worlds is therefore essential for mixed reality, in addition to the realism of the virtual world. Our efforts in mixed reality research span two aspects: how to create models of virtual objects and how to integrate such virtual objects with real scenes. For model creation, we have developed two methods, the model-based rendering method and the eigen-texture method, both of which automatically create rendering models by observing real objects. The model-based rendering method first analyzes input images of real objects, obtains reflectance parameters from this analysis, and then uses the determined reflectance parameters to generate the virtual image [1]. This method stores very compact information, namely surface shapes and reflectance parameters.
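The analysis-then-synthesis pipeline described above (estimate reflectance parameters from observed images, then re-render with them) can be sketched in a few lines. This is a minimal illustration under strong simplifying assumptions: a Lambertian-plus-specular model with a known shininess exponent and synthetic observations, not the paper's exact formulation (which uses a Torrance-Sparrow-style model [3]); all variable names here are hypothetical.

```python
import numpy as np

# Minimal sketch of model-based rendering: recover diffuse/specular
# reflectance weights from observed intensities, then re-render.
# The simplified Lambertian-plus-Phong model is an illustrative
# assumption, not the authors' exact formulation.

rng = np.random.default_rng(0)

# Synthetic "observations": unit surface normals, fixed light/view directions.
normals = rng.normal(size=(200, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light = np.array([0.0, 0.0, 1.0])
view = np.array([0.0, 0.0, 1.0])
shininess = 20.0  # assumed known for this sketch

# Per-sample diffuse and specular basis terms.
n_dot_l = np.clip(normals @ light, 0.0, None)
reflected = 2.0 * n_dot_l[:, None] * normals - light  # mirror direction
spec = np.clip(reflected @ view, 0.0, None) ** shininess

# Ground-truth parameters used to simulate the input images.
kd_true, ks_true = 0.7, 0.3
observed = kd_true * n_dot_l + ks_true * spec

# "Analysis" step: linear least squares for the reflectance parameters.
A = np.stack([n_dot_l, spec], axis=1)
(kd_est, ks_est), *_ = np.linalg.lstsq(A, observed, rcond=None)

# "Synthesis" step: re-render a virtual view with the recovered parameters.
rendered = kd_est * n_dot_l + ks_est * spec
```

The compactness claimed in the text follows directly: instead of storing every input image, only the surface geometry and the handful of fitted parameters (here `kd_est`, `ks_est`) need to be kept to regenerate views.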

[1] Ronald Azuma, et al. A Survey of Augmented Reality, 1997, Presence: Teleoperators & Virtual Environments.

[2] Katsushi Ikeuchi, et al. Eigen-texture method: Appearance compression based on 3D model, 1999, Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition.

[3] K. Torrance, et al. Theory for off-specular reflection from roughened surfaces, 1967.

[4] Takeo Kanade, et al. Surface Reflection: Physical and Geometrical Perspectives, 1989, IEEE Trans. Pattern Anal. Mach. Intell.

[5] Kosuke Sato, et al. Determining Reflectance Properties of an Object Using Range and Brightness Images, 1991, IEEE Trans. Pattern Anal. Mach. Intell.

[6] Katsushi Ikeuchi, et al. Acquiring a Radiance Distribution to Superimpose Virtual Objects onto Real Scene, 2001, MVA.

[7] Marc Levoy, et al. A volumetric method for building complex models from range images, 1996, SIGGRAPH.

[8] Michael Bajura, et al. Merging Virtual Objects with the Real World, 1992.

[9] Roger Y. Tsai, et al. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses, 1987, IEEE J. Robotics Autom.

[10] Tony DeRose, et al. Surface reconstruction from unorganized points, 1992, SIGGRAPH.

[11] Marc Levoy, et al. Light field rendering, 1996, SIGGRAPH.

[12] Katsushi Ikeuchi, et al. Temporal-color space analysis of reflection, 1994.

[13] T. Caelli, et al. Inverting an illumination model from range and intensity maps, 1994.

[14] Zhengyou Zhang, et al. Modeling geometric structure and illumination variation of a scene from real images, 1998, Sixth International Conference on Computer Vision.

[15] Katsushi Ikeuchi, et al. Object shape and reflectance modeling from observation, 1997, SIGGRAPH.

[16] Hiroshi Murase, et al. Visual learning and recognition of 3-D objects from appearance, 2005, International Journal of Computer Vision.

[17] K. Sato, et al. Range imaging system utilizing nematic liquid crystal mask, 1987.

[18] Ryutarou Ohbuchi, et al. Merging virtual objects with the real world: seeing ultrasound imagery within the patient, 1992, SIGGRAPH.

[19] Mark A. Livingston, et al. Superior augmented reality registration by integrating landmark tracking and magnetic tracking, 1996, SIGGRAPH.

[20] Richard Szeliski, et al. The lumigraph, 1996, SIGGRAPH.

[21] Katsushi Ikeuchi, et al. Consensus surfaces for modeling 3D objects from multiple range images, 1998, Sixth International Conference on Computer Vision.

[22] A. Fournier, et al. Common Illumination between Real and Computer Generated Scenes, 1992.