Anti-occlusion Light-Field Optical Flow Estimation Using Light-Field Super-Pixels

Optical flow estimation is one of the most important problems in the computer vision community, yet current methods still cannot provide reliable results around occlusion boundaries. Light-field cameras capture hundreds of views in a single shot, so occlusion ambiguities can be better resolved by consulting the other views. In this paper, we present a novel method for anti-occlusion optical flow estimation in a dynamic light field. We first model the light-field superpixel (LFSP) as a slanted plane in 3D. The motion of pixels that are occluded in the central view can then be optimized using the un-occluded pixels in the other views, so the optical flow around occlusion boundaries can be computed reliably. Experimental results on both synthetic and real light fields demonstrate advantages over state-of-the-art methods as well as the method's performance on 4D optical flow computation.
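
The core idea can be illustrated with a minimal sketch (not the authors' implementation): assume each LFSP carries slanted-plane parameters (a, b, c) giving disparity d = a*x + b*y + c, so a central-view pixel (x, y) re-projects to (x + d*u, y + d*v) in the sub-aperture view at angular offset (u, v). Flow samples from views where that location is un-occluded are then aggregated; a median here stands in for the paper's optimization, and the function names, plane parameterization, and occlusion-mask representation are all illustrative assumptions.

    import numpy as np

    def slanted_plane_disparity(x, y, plane):
        # Disparity of pixel (x, y) under an assumed slanted-plane LFSP model d = a*x + b*y + c.
        a, b, c = plane
        return a * x + b * y + c

    def flow_from_unoccluded_views(x, y, plane, angular_offsets, flow_fields, occlusion_masks):
        # Estimate the flow of a central-view pixel that is occluded, by gathering
        # flow evidence from sub-aperture views where its re-projection is visible.
        d = slanted_plane_disparity(x, y, plane)
        samples = []
        for (u, v), flow, occ in zip(angular_offsets, flow_fields, occlusion_masks):
            xs = int(round(x + d * u))          # re-projected column in view (u, v)
            ys = int(round(y + d * v))          # re-projected row in view (u, v)
            h, w = occ.shape
            if 0 <= ys < h and 0 <= xs < w and not occ[ys, xs]:
                samples.append(flow[ys, xs])    # flow field is (H, W, 2) per view
        if not samples:
            return None                         # occluded in every view: caller falls back to plane motion
        return np.median(np.stack(samples), axis=0)   # robust aggregate of un-occluded evidence

The median is only a simple robust stand-in; in the paper the occluded pixel's motion is recovered by optimization over the un-occluded observations within the same LFSP.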
