An improved augmented-reality method for inserting virtual objects into scenes containing transparent objects

In augmented reality, virtual objects inserted into a real scene must be visually consistent with it: the objects rendered by the augmented-reality system should match the illumination of the real scene. For complex scenes, however, illumination estimation alone is not enough. When the real scene contains transparent objects, their refractive index and roughness strongly affect the quality of the virtual-real fusion. To tackle this problem, this paper proposes a new approach that jointly estimates illumination and transparent-material parameters for inserting virtual objects into the real scene. We solve for the material parameters and the illumination simultaneously by nesting a microfacet model and a hemispherical area illumination model within inverse path tracing. Even though the recovered scene geometry contains no explicit light-source geometry, the proposed hemispherical area illumination model can still recover the scene appearance. Experiments on both synthetic and real-world datasets verify that the proposed approach outperforms the state-of-the-art method, both subjectively and objectively.
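The abstract does not spell out the material model, but the role of the two estimated parameters can be illustrated with a standard GGX microfacet reflection lobe combined with a dielectric Fresnel term. The sketch below is a generic, minimal formulation, not the authors' implementation: the function names, the parameter names (`alpha` for roughness, `ior` for relative refractive index), and the example directions are illustrative assumptions, and the paper's actual contribution of nesting such a model together with a hemispherical area illumination model inside inverse path tracing is not reproduced here.

```python
# Illustrative sketch only: a standard GGX microfacet reflection lobe with a
# dielectric Fresnel term, showing how roughness (alpha) and refractive index
# (ior) enter a microfacet model. Not the paper's exact formulation.
import numpy as np

def fresnel_dielectric(cos_i, ior):
    """Unpolarized Fresnel reflectance for a dielectric with relative IOR (n_t / n_i)."""
    cos_i = np.clip(cos_i, 0.0, 1.0)
    sin_t2 = (1.0 / ior) ** 2 * (1.0 - cos_i ** 2)
    if sin_t2 >= 1.0:                     # total internal reflection
        return 1.0
    cos_t = np.sqrt(1.0 - sin_t2)
    r_s = (cos_i - ior * cos_t) / (cos_i + ior * cos_t)
    r_p = (ior * cos_i - cos_t) / (ior * cos_i + cos_t)
    return 0.5 * (r_s ** 2 + r_p ** 2)

def ggx_ndf(cos_h, alpha):
    """GGX / Trowbridge-Reitz normal distribution D(h)."""
    a2 = alpha * alpha
    d = cos_h * cos_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * d * d)

def smith_g1(cos_v, alpha):
    """Smith masking term for the GGX distribution (separable form)."""
    a2 = alpha * alpha
    return 2.0 * cos_v / (cos_v + np.sqrt(a2 + (1.0 - a2) * cos_v * cos_v))

def microfacet_brdf(wi, wo, n, alpha, ior):
    """Specular reflection term f_r = D * G * F / (4 |n.wi| |n.wo|)."""
    cos_i, cos_o = np.dot(n, wi), np.dot(n, wo)
    if cos_i <= 0.0 or cos_o <= 0.0:
        return 0.0
    h = wi + wo                           # half vector between light and view
    h /= np.linalg.norm(h)
    D = ggx_ndf(np.dot(n, h), alpha)
    G = smith_g1(cos_i, alpha) * smith_g1(cos_o, alpha)
    F = fresnel_dielectric(np.dot(wi, h), ior)
    return D * G * F / (4.0 * cos_i * cos_o)

# Example: a mirror-like configuration on a slightly rough, glass-like surface.
n  = np.array([0.0, 0.0, 1.0])
wi = np.array([0.3, 0.0, 0.954]); wi /= np.linalg.norm(wi)   # toward the light
wo = np.array([-0.3, 0.0, 0.954]); wo /= np.linalg.norm(wo)  # toward the camera
print(microfacet_brdf(wi, wo, n, alpha=0.1, ior=1.5))
```

In an inverse-rendering setting of the kind the abstract describes, `alpha` and `ior` would be the unknowns optimized per transparent object, with the hemispherical area illumination parameters optimized jointly against the observed images.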
