Identifying Free Space in a Robot Bird-Eye View

Free space detection based on visual cues is an emerging approach in robotics. Our working domain is the Virtual Rescue League of the RoboCup, where efficient obstacle avoidance is crucial for finding victims under challenging conditions. In this study a machine-learning approach is applied to distinguish obstacles from free space by their visual appearance. Omnidirectional camera images are transformed to a bird's-eye view, which makes comparison with local occupancy maps possible. The bird's-eye view images are automatically labeled using laser range information, allowing completely autonomous and continuous learning of accurate color models. Two color-based models are compared: a Histogram Method and a Gaussian Mixture Model. Both methods perform very well, achieving high precision and recall on a typical map from the Rescue League. The Gaussian Mixture Model achieves the best scores on this map with far fewer parameters, but is beaten by the Histogram Method on real data collected by our Nomad robot. Additionally, this study demonstrates the importance of choosing the right color normalization scheme and model parameters.
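To make the comparison concrete, the following is a minimal sketch of the two kinds of color model the abstract describes, not the paper's actual implementation: a normalized 2-D histogram and a Gaussian mixture, both fitted to pixels that a laser scan would label as free space, and both evaluated as pixel-wise density scores. The class names (HistogramModel, GmmModel), the rg-chromaticity normalization, and the use of scikit-learn's GaussianMixture are illustrative assumptions.

```python
# Sketch (assumed, not the paper's code): two color models for free-space pixels.
import numpy as np
from sklearn.mixture import GaussianMixture  # assumption: scikit-learn available


def to_rg_chromaticity(pixels_rgb):
    """Normalize RGB pixels to (r, g) chromaticity to reduce illumination effects."""
    rgb = pixels_rgb.astype(np.float64) + 1e-6
    s = rgb.sum(axis=1, keepdims=True)
    return (rgb / s)[:, :2]  # keep r and g; b = 1 - r - g is redundant


class HistogramModel:
    """Free-space color model as a normalized 2-D histogram over (r, g)."""

    def __init__(self, bins=32):
        self.bins = bins
        self.hist = None

    def fit(self, free_pixels_rgb):
        rg = to_rg_chromaticity(free_pixels_rgb)
        self.hist, _ = np.histogramdd(
            rg, bins=self.bins, range=[(0.0, 1.0), (0.0, 1.0)], density=True)
        return self

    def score(self, pixels_rgb):
        rg = to_rg_chromaticity(pixels_rgb)
        idx = np.clip((rg * self.bins).astype(int), 0, self.bins - 1)
        return self.hist[idx[:, 0], idx[:, 1]]  # density per pixel


class GmmModel:
    """Free-space color model as a Gaussian mixture over (r, g)."""

    def __init__(self, n_components=3):
        self.gmm = GaussianMixture(n_components=n_components)

    def fit(self, free_pixels_rgb):
        self.gmm.fit(to_rg_chromaticity(free_pixels_rgb))
        return self

    def score(self, pixels_rgb):
        # score_samples returns per-pixel log-likelihoods; exponentiate to densities.
        return np.exp(self.gmm.score_samples(to_rg_chromaticity(pixels_rgb)))


# Usage idea: fit either model on pixels the laser range data marks as free,
# then threshold score() over the bird's-eye-view image to classify free space.
```

In this sketch the histogram needs bins² parameters, while the mixture needs only a handful of means, covariances, and weights per component, which illustrates why the GMM can match or exceed the histogram with far fewer parameters.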
