SLAM Using 3D Reconstruction via Visual RGB and RGB-D Sensory Input

This paper investigates the simultaneous localization and mapping (SLAM) problem using the Microsoft Kinect™ sensor array and an autonomous mobile robot capable of self-localization. Together they cover the major components of SLAM: mapping, sensing, locating, and modeling. The Kinect™ sensor array provides a dual camera output: an RGB image from a CMOS camera and an RGB-D image from a depth camera. The sensors are mounted on the KCLBOT, an autonomous, nonholonomic, two-wheeled maneuverable mobile robot. The mobile robot platform can self-localize and perform navigation maneuvers to traverse to set target points using intelligent processes. The target point for this operation is a fixed coordinate position that the mobile robot must reach while taking into consideration the obstacles in the environment, which are represented in a 3D spatial model. After a calibration routine, images extracted from the sensor are used to produce a 3D reconstruction of the traversable environment for the mobile robot to navigate. Using the constructed 3D model, the autonomous mobile robot follows a polynomial-based nonholonomic trajectory with obstacle avoidance. The experimental results demonstrate the cost-effectiveness of this off-the-shelf sensor array. The results show its effectiveness in producing a 3D reconstruction of an environment and the feasibility of using the Microsoft Kinect™ sensor for mapping, sensing, locating, and modeling, which enables the implementation of SLAM on this type of platform.

Copyright © 2011 by ASME
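The core step described above, turning the Kinect™ depth image into a 3D spatial model, can be sketched as a pinhole-model back-projection. The paper does not specify its reconstruction code; the function below is a minimal illustrative sketch, and the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are assumed placeholder values, not calibration results from the paper.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into a 3D point cloud
    using the pinhole camera model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    Pixels with no depth reading (Z <= 0) are discarded."""
    h, w = depth_m.shape
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Illustrative usage with a synthetic 4x4 depth image at a uniform 1 m
# and hypothetical intrinsics (fx, fy in the range often quoted for
# the Kinect, scaled image center for the toy 4x4 image).
depth = np.ones((4, 4))
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=1.5, cy=1.5)
```

In a full pipeline, a point cloud like `cloud` would be fused with the robot's pose estimate and rasterized into the obstacle map that the polynomial trajectory planner consumes.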
