A Real-time Autonomous Robot Navigation Framework for Human-like High-level Interaction and Task Planning in Global Dynamic Environments

In this paper, we present a framework for real-time autonomous robot navigation based on cloud and on-demand databases that addresses two major issues: human-like robot interaction and task planning in a global dynamic environment that is not known a priori. Our framework builds a human-like brain GPS mapping system for the robot using spatial information and performs 3D visual semantic SLAM for independent robot navigation. We accomplish this by separating the robot's memory system into Long-Term Memory (LTM) and Short-Term Memory (STM). We also form the robot's behavior and knowledge system by linking these memories to an Autonomous Navigation Module (ANM), a Learning Module (LM), and a Behavior Planner Module (BPM). The proposed framework is assessed in simulation using a ROS-based Gazebo-simulated mobile robot equipped with an RGB-D camera (3D sensor) and a laser range finder (2D sensor) in a 3D model of a realistic indoor environment. The simulation results corroborate the substantial practical merit of the proposed framework.
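To make the memory/module decomposition concrete, the sketch below is a minimal, hypothetical illustration (not the authors' implementation; all class and method names are assumptions) of how an LTM/STM split could be shared by the ANM, LM, and BPM in Python:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class LongTermMemory:
    """Persistent knowledge: semantic map, learned places, object labels."""
    semantic_map: Dict[str, Any] = field(default_factory=dict)

    def store(self, key: str, value: Any) -> None:
        self.semantic_map[key] = value

@dataclass
class ShortTermMemory:
    """Volatile working memory: recent observations and local map patches."""
    recent_observations: List[Dict[str, Any]] = field(default_factory=list)

    def push(self, observation: Dict[str, Any], limit: int = 100) -> None:
        self.recent_observations.append(observation)
        del self.recent_observations[:-limit]  # keep only the newest entries

class AutonomousNavigationModule:
    """Would consume STM (local obstacles) and LTM (global semantic map) to plan paths."""
    def __init__(self, ltm: LongTermMemory, stm: ShortTermMemory):
        self.ltm, self.stm = ltm, stm

class LearningModule:
    """Would promote stable STM content (e.g., recognized places) into LTM."""
    def __init__(self, ltm: LongTermMemory, stm: ShortTermMemory):
        self.ltm, self.stm = ltm, stm

    def consolidate(self) -> None:
        for obs in self.stm.recent_observations:
            if obs.get("stable"):          # hypothetical promotion criterion
                self.ltm.store(obs["label"], obs)

class BehaviorPlannerModule:
    """Would select high-level tasks from LTM knowledge and current STM context."""
    def __init__(self, ltm: LongTermMemory, stm: ShortTermMemory):
        self.ltm, self.stm = ltm, stm

# Wiring: both memories are shared by all three modules.
ltm, stm = LongTermMemory(), ShortTermMemory()
anm = AutonomousNavigationModule(ltm, stm)
lm = LearningModule(ltm, stm)
bpm = BehaviorPlannerModule(ltm, stm)
```

In such a design, STM would hold transient sensor-derived data while LTM accumulates the semantic map, with the Learning Module acting as the bridge that consolidates one into the other.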
