PAG-Net: Progressive Attention Guided Depth Super-resolution Network

In this paper, we propose PAG-Net, a novel method for the challenging problem of guided depth map super-resolution. It is based on residual dense networks and incorporates an attention mechanism to suppress the texture-copying problem that arises from improper guidance by RGB images. The attention module applies spatial attention to the guidance image, conditioned on the depth features. We evaluate the trained models on test datasets and compare against state-of-the-art depth super-resolution methods.
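The core idea of the attention module, a depth-conditioned spatial attention map that reweights the RGB guidance features, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the 1x1 channel projection `w`, the sigmoid gating, and all shapes are assumptions chosen for simplicity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(depth_feats, guidance_feats, w, b=0.0):
    """Reweight guidance features with a spatial map derived from depth features.

    depth_feats, guidance_feats: arrays of shape (C, H, W).
    w: assumed learned 1x1 projection weights of shape (C,).
    """
    # Collapse depth channels into one spatial map via the 1x1 projection
    attn_logits = np.tensordot(w, depth_feats, axes=([0], [0])) + b  # (H, W)
    attn = sigmoid(attn_logits)  # per-pixel gate in (0, 1)
    # Broadcast the gate over every guidance channel, suppressing
    # RGB texture in regions where the depth cue argues against it
    return guidance_feats * attn[None, :, :]

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
depth = rng.standard_normal((C, H, W))
guide = rng.standard_normal((C, H, W))
w = rng.standard_normal(C)
out = spatial_attention(depth, guide, w)
print(out.shape)  # (4, 8, 8)
```

Because the gate is bounded in (0, 1), the module can only attenuate guidance features at each pixel, which is one way to keep unreliable RGB texture from being copied into the upsampled depth map.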
