Shadow compensation for outdoor perception

Outdoor robotic systems rely on perception modules that must be robust to variations in environmental conditions. In particular, vision-based perception systems are affected by illumination variations caused by occlusions. We propose an approach to estimate the lighting distribution of outdoor scenes. This approach enables us to compensate for shadows and thereby obtain images that are invariant to the sun position and scene geometry, while retaining the dimensionality of the original data. The method combines images with geometric information provided by range sensors to infer shadows. We select a pair of points on a shadow boundary belonging to a single material and estimate a terrestrial sunlight-skylight ratio. Individual scaling factors are then calculated for all points based on their orientation and incident illumination sources. The result is a coloured point cloud that is independent of illumination variation due to occlusions and geometry. To demonstrate the effectiveness and generalisation of the approach, we present evaluations on two datasets with different cameras. The first uses a hyperspectral sensor that allows us to analyse the results across a large number of wavelengths, while the second uses a standard RGB camera. The approach is shown to consistently provide good illumination compensation in both scenarios.
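The core idea can be illustrated with a minimal sketch. Assuming a simple two-term illumination model (uniform skylight plus a directional sun term), a pair of same-material points straddling a shadow boundary yields the sun-to-sky ratio, which then gives a per-point scaling factor from each point's surface normal and shadow state. The function and variable names below are illustrative, not the paper's implementation:

```python
import numpy as np

def sun_sky_ratio(lit_intensity, shadow_intensity, cos_sun_angle):
    """Estimate the terrestrial sunlight-to-skylight ratio from one
    boundary pair of the same material.

    Model: lit = albedo * (sky + sun * cos_theta), shadow = albedo * sky,
    so sun / sky = (lit / shadow - 1) / cos_theta.
    """
    return (lit_intensity / shadow_intensity - 1.0) / cos_sun_angle

def compensate(intensities, normals, sun_dir, in_shadow, ratio):
    """Scale each point so its colour appears lit by skylight alone.

    intensities: (N, C) observed colours; normals: (N, 3) unit normals;
    sun_dir: (3,) unit vector toward the sun; in_shadow: (N,) bool mask.
    """
    cos_theta = np.clip(normals @ sun_dir, 0.0, None)   # incident sun angle
    illum = 1.0 + ratio * cos_theta * (~in_shadow)      # sky + visible sun
    return intensities / illum[:, None]                 # per-point scaling
```

In this toy model the compensated values are expressed in units of the skylight irradiance; the real method operates per wavelength for the hyperspectral data and per channel for RGB.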
