Initial RTAB-Map Navigation Analysis for Service Robot

A service robot needs the capability to navigate accurately and to predict its own motion, grounded in a sound theoretical framework. Mapping is the most critical component of a robot navigation system and must be reliable to support normal operation. Producing a good map remains a central challenge for robotics developers and researchers, because moving objects continually change the layout of the environment. We likewise aim to provide a good map for an autonomous robot configuration; in this work, RTAB-Map continuously builds point-cloud maps of the surroundings. This paper uses Real-Time Appearance-Based Mapping (RTAB-Map) to create maps for a robot managed by the open-source Robot Operating System (ROS) on the TurtleBot 2 and TurtleBot 3 platforms. Our results show that RTAB-Map provides promising guidance for service robot navigation.
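To make the setup concrete, the sketch below shows one way a ROS node could consume the maps RTAB-Map publishes so a navigation stack can use them. This is an illustrative example, not the paper's implementation: the topic names `/rtabmap/grid_map` and `/rtabmap/cloud_map` are the usual rtabmap_ros defaults and may differ in a particular TurtleBot configuration.

```python
#!/usr/bin/env python
"""Minimal sketch of a ROS node that listens to the occupancy grid and
point-cloud maps typically published by rtabmap_ros. Topic names are
assumed defaults and may need to match the robot's actual launch setup."""
import rospy
from nav_msgs.msg import OccupancyGrid
from sensor_msgs.msg import PointCloud2


def on_grid(msg):
    # 2D occupancy grid assembled by RTAB-Map; a planner such as move_base
    # can use this for global path planning.
    info = msg.info
    rospy.loginfo("grid_map: %dx%d cells @ %.3f m/cell",
                  info.width, info.height, info.resolution)


def on_cloud(msg):
    # Incrementally updated 3D point cloud of the mapped environment.
    rospy.loginfo("cloud_map: %d points", msg.width * msg.height)


if __name__ == "__main__":
    rospy.init_node("rtabmap_map_listener")
    rospy.Subscriber("/rtabmap/grid_map", OccupancyGrid, on_grid)
    rospy.Subscriber("/rtabmap/cloud_map", PointCloud2, on_cloud)
    rospy.spin()
```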
