Collaborative Visual SLAM Framework for a Multi-Robot System

This paper presents a framework for collaborative visual SLAM using monocular cameras on a team of mobile robots. Each robot runs SLAM on its on-board processor, estimating the seven degrees of freedom of camera motion (pose plus scale) and building a map of the environment as a pose-graph of keyframes. Each robot communicates with a central server, sending its local keyframe information. The server merges the local maps into a global map whenever it detects visual overlap between their scenes. In the background, the global map is continuously optimized using bundle adjustment, and the updated pose information is fed back to the individual robots. We present preliminary experimental results from testing the framework with two mobile robots in an indoor environment.
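The client–server flow described above can be sketched in a few lines. This is a minimal illustrative Python sketch, not the paper's implementation: the `Keyframe` fields, the `CentralServer` class, and the Jaccard-similarity stand-in for appearance-based place recognition (which the paper would realize with a real visual-vocabulary method) are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass


@dataclass
class Keyframe:
    """Hypothetical keyframe message sent from a robot to the server."""
    robot_id: int
    kf_id: int
    pose: tuple              # (x, y, theta) -- simplified stand-in for a 7-DoF Sim(3) pose
    place_signature: frozenset  # stand-in appearance descriptor (e.g. a bag-of-words word set)


class CentralServer:
    """Collects per-robot keyframes and records merges when visual overlap is detected."""

    def __init__(self, overlap_threshold=0.5):
        self.maps = {}               # robot_id -> list of received keyframes
        self.merged = set()          # frozenset pairs of robot ids whose maps are merged
        self.overlap_threshold = overlap_threshold

    def receive(self, kf):
        """Store an incoming keyframe and check it against all other robots' maps."""
        self.maps.setdefault(kf.robot_id, []).append(kf)
        self._check_overlap(kf)

    def _check_overlap(self, kf):
        for other_id, keyframes in self.maps.items():
            if other_id == kf.robot_id:
                continue
            for other in keyframes:
                sim = self._similarity(kf.place_signature, other.place_signature)
                if sim >= self.overlap_threshold:
                    # A real system would estimate the relative Sim(3) transform here
                    # and fuse the two pose-graphs before running bundle adjustment.
                    self.merged.add(frozenset((kf.robot_id, other_id)))
                    return

    @staticmethod
    def _similarity(a, b):
        # Jaccard similarity over word sets: a toy proxy for place recognition.
        union = a | b
        return len(a & b) / len(union) if union else 0.0
```

In this sketch the background bundle adjustment and the pose-update feedback to the robots are omitted; the point is only the message flow: robots push keyframes, the server accumulates them per robot and merges maps once two keyframes look sufficiently alike.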
