Visual SLAM Based on Single Omnidirectional Views

This chapter addresses the problem of Simultaneous Localization and Mapping (SLAM) using visual information from the environment, exploiting the versatility of a single omnidirectional camera. Traditional visual SLAM approaches concentrate on estimating a set of 3D points in the environment, denoted as visual landmarks; as the number of landmarks grows, computing the map becomes increasingly complex. In this work we propose a different representation of the environment that simplifies map computation and is more compact. In particular, the map is composed of a reduced set of omnidirectional images, denoted as views, acquired at certain poses in the environment. Each view comprises a position and orientation in the map together with a set of 2D interest points extracted in the image reference frame. The information gathered by these views is used to find corresponding points between the view captured at the current robot pose and the views stored in the map. Once a set of corresponding points is found, a motion transformation can be computed to recover the relative position of both views, which allows us to estimate the current pose of the robot and build the map. Moreover, since data association is a troublesome issue in this framework, we propose a new method to find correspondences that is more reliable: a Gaussian distribution is generated to propagate the current error in the map into the matching process. We present a series of experiments with real data to validate the ideas and the SLAM solution proposed in this work.
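The correspondence method described above, propagating map uncertainty as a Gaussian into matching, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the chapter's actual implementation: it gates candidate 2D interest points by their Mahalanobis distance to a predicted image position, so that a larger map/pose covariance admits a wider search region. The function name, the 2x2 pixel covariance, and the chi-square threshold (95% confidence, 2 degrees of freedom) are all choices made here for illustration.

```python
import numpy as np

def mahalanobis_gate(predicted, cov, candidates, chi2_thresh=5.99):
    """Return indices of candidate 2D points inside the confidence
    ellipse of the predicted point (chi-square test, 2 dof).

    predicted  : (2,) predicted pixel position of a map feature
    cov        : (2, 2) Gaussian covariance propagated from map/pose error
    candidates : (N, 2) detected interest points in the current view
    """
    cov_inv = np.linalg.inv(cov)
    diffs = candidates - predicted                 # (N, 2) residuals
    # Squared Mahalanobis distance d_i = r_i^T * cov_inv * r_i
    d2 = np.einsum('ni,ij,nj->n', diffs, cov_inv, diffs)
    return np.nonzero(d2 < chi2_thresh)[0]

# Example: uncertainty of 5 px (x) and 2 px (y) around a predicted point.
pred = np.array([100.0, 50.0])
cov = np.array([[25.0, 0.0],
                [0.0,  4.0]])
cands = np.array([[104.0, 51.0],   # inside the ellipse
                  [100.0, 58.0],   # too far in y
                  [112.0, 50.0]])  # just inside along the wide x axis
print(mahalanobis_gate(pred, cov, cands))  # -> [0 2]
```

Only candidates passing this gate would then be compared by descriptor similarity, which restricts matching to the region consistent with the current map error.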
