Peripheral Expansion of Depth Information via Layout Estimation with Fisheye Camera

Consumer RGB-D cameras have become widely used in recent years, but their field of view is too narrow for certain applications. We propose a new hybrid camera system composed of a conventional RGB-D camera and a fisheye camera to extend the field of view beyond 180\(^{\circ }\). With this system, a region of the hemispherical image has depth certainty, while the color data in the periphery is used to extend the structural information of the scene. We have developed a new method to generate scaled layout hypotheses from relevant corners, combining line extraction in the fisheye image with the depth information. Experiments with real images from different scenarios validate our layout recovery method and the advantages of this camera system, which is also able to overcome severe occlusions. As a result, we obtain a scaled 3D model that expands the original depth information with the wide scene reconstruction. Our proposal successfully expands the depth map by more than eleven times in a single shot.
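To illustrate the core idea of scaling peripheral layout information with the central depth data, the following is a minimal sketch, not the paper's actual pipeline: it assumes a unified-sphere fisheye model with known intrinsics, a floor height measured from the RGB-D overlap region, and layout corners already detected as fisheye pixels. All function names, parameters, and numeric values are illustrative assumptions.

```python
import numpy as np

def backproject_unified(u, v, fx, fy, cx, cy, xi):
    """Back-project a fisheye pixel to a unit ray (unified sphere model)."""
    mx, my = (u - cx) / fx, (v - cy) / fy
    r2 = mx * mx + my * my
    # Point on the unit sphere corresponding to the normalized image point.
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (1.0 + r2)
    ray = np.array([eta * mx, eta * my, eta - xi])
    return ray / np.linalg.norm(ray)

def scale_floor_corner(ray, floor_height):
    """Intersect a corner ray with the floor plane y = floor_height
    (camera at the origin, y axis pointing down) to get a metric 3D point."""
    t = floor_height / ray[1]
    return t * ray

# Example: floor height taken from the RGB-D region (e.g. 1.2 m) and a
# layout corner detected in the fisheye periphery (illustrative values).
fx = fy = 260.0
cx, cy = 640.0, 480.0
xi = 0.9
ray = backproject_unified(900.0, 700.0, fx, fy, cx, cy, xi)
corner_3d = scale_floor_corner(ray, floor_height=1.2)
print(corner_3d)  # scaled corner position in metres
```

In this sketch the depth camera provides the metric scale (the floor height), while the fisheye back-projection extends the layout to corners that lie outside the depth sensor's field of view; the paper's method combines these cues with line extraction to rank full layout hypotheses.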
