Dense Light Field Reconstruction From Sparse Sampling Using Residual Network

A light field records numerous light rays from a real-world scene. However, capturing a dense light field with existing devices is a time-consuming process. Moreover, reconstructing a large number of light rays, equivalent to multiple light fields, from sparse sampling poses a severe challenge for existing methods. In this paper, we present a learning-based method to reconstruct multiple novel light fields between two mutually independent light fields. We show that light rays distributed across different light fields obey the same consistency constraints under a certain condition. The most significant constraint is a depth-related correlation between the angular and spatial dimensions. Our method avoids computing this error-sensitive constraint explicitly by employing a deep neural network. We predict residual values of pixels on the epipolar plane image (EPI) to reconstruct novel light fields. Our method is able to reconstruct 2 to 4 novel light fields between two mutually independent input light fields. We also compare our results with those of a number of alternative methods in the literature, which shows that our reconstructed light fields have better structural similarity and occlusion relationships.
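
The abstract describes predicting per-pixel residuals on EPIs with a deep residual network. The sketch below is a minimal illustration of that idea, not the paper's actual architecture: it assumes a PyTorch implementation with hypothetical layer counts and channel widths, upsamples a sparsely sampled EPI along the angular axis, and lets a small residual CNN refine the coarse estimate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """A basic residual block in the spirit of He et al. (ResNet)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        out = self.conv2(F.relu(self.conv1(x)))
        return x + out  # skip connection carries the input through

class EPIResidualNet(nn.Module):
    """Hypothetical network: predicts a residual correction for an
    angularly upsampled EPI (feature head -> residual blocks -> tail)."""
    def __init__(self, in_channels=1, features=64, num_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(in_channels, features, kernel_size=3, padding=1)
        self.body = nn.Sequential(*[ResidualBlock(features) for _ in range(num_blocks)])
        self.tail = nn.Conv2d(features, in_channels, kernel_size=3, padding=1)

    def forward(self, sparse_epi, angular_factor=4):
        # Coarse estimate: bilinear upsampling along the angular (height) axis only.
        upsampled = F.interpolate(
            sparse_epi,
            scale_factor=(angular_factor, 1),
            mode="bilinear",
            align_corners=False,
        )
        # Learned correction added on top of the coarse estimate.
        residual = self.tail(self.body(self.head(upsampled)))
        return upsampled + residual

# Usage: a batch of grayscale EPIs with 3 angular rows and 128 spatial columns.
net = EPIResidualNet()
dense_epi = net(torch.randn(8, 1, 3, 128))  # -> shape (8, 1, 12, 128)
```

Predicting only the residual, rather than the dense EPI directly, keeps the network's target close to zero and is a common design choice for this kind of reconstruction; the exact upsampling factor and block configuration here are illustrative assumptions.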
