Image Guided Depth Super-Resolution for Spacewarp in XR Applications

In this paper, we present an image-guided depth super-resolution approach for generating higher-quality, higher-resolution depth maps from lower-resolution depth points, so that they can be used in depth-based image re-projection (a.k.a. Spacewarp) and other XR applications. In this approach, a higher-resolution depth map is reconstructed using the spatial, intensity, and depth information of neighboring pixels. Three kinds of weights are computed from this neighborhood information: the first uses 3D pose information, the second uses color-image intensity information, and the third uses color-image spatial information. We compute the weighted sum of the depths in the neighborhood and accumulate the weights, so the depth candidate for the current point is the weighted average of the neighboring depths. A criterion is then built to select the optimal solution for the current pixel. After all pixels are processed, a higher-resolution depth map is obtained. Applications and result analysis for the approach are also presented in this paper. With different levels of downsampled depth points, higher-resolution depth maps are generated both with a commonly used regular-grid interpolation approach and with the image-guided depth super-resolution algorithm presented here. The comparison shows that our algorithm produces higher-resolution depth maps with more accurate and sharper objects and boundaries than commonly used interpolation approaches.
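To make the weighted-average reconstruction described above concrete, the following is a minimal sketch, not the paper's actual implementation. It assumes Gaussian kernels for all three weights, a sparse depth map stored as a dense array with zeros at missing samples, and a depth-consistency term standing in for the pose-based weight, whose exact form the abstract does not specify. The function name `guided_depth_upsample` and its parameters are illustrative only.

```python
import numpy as np

def guided_depth_upsample(sparse_depth, guide_rgb, radius=4,
                          sigma_s=3.0, sigma_i=10.0, sigma_d=0.2):
    """Fill each pixel with a weighted average of nearby sparse depth samples.

    sparse_depth : (H, W) float array, 0 where no depth sample exists
    guide_rgb    : (H, W, 3) float array, high-resolution color guide image
    """
    h, w = sparse_depth.shape
    dense = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            # neighborhood window around the current pixel
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            win = sparse_depth[y0:y1, x0:x1]
            sy, sx = np.nonzero(win)          # known samples in the window
            if sy.size == 0:
                continue                      # no candidates; leave unfilled
            d = win[sy, sx]
            gy, gx = sy + y0, sx + x0         # absolute image coordinates
            # spatial weight: image-plane distance to the current pixel
            w_s = np.exp(-((gy - y) ** 2 + (gx - x) ** 2) / (2 * sigma_s ** 2))
            # intensity weight: color similarity in the guide image
            diff = guide_rgb[gy, gx] - guide_rgb[y, x]
            w_i = np.exp(-np.sum(diff ** 2, axis=1) / (2 * sigma_i ** 2))
            # depth-consistency weight (assumed stand-in for the pose-based
            # term): favor candidates close to the median candidate depth
            w_d = np.exp(-((d - np.median(d)) ** 2) / (2 * sigma_d ** 2))
            wgt = w_s * w_i * w_d
            # weighted average of neighborhood depths as the depth candidate
            dense[y, x] = np.sum(wgt * d) / np.sum(wgt)
    return dense
```

The per-pixel loop keeps the sketch readable; a practical implementation would vectorize or restrict the search to valid samples, and the paper's selection criterion for the optimal per-pixel solution is not reproduced here.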
