Semantic map for service robot navigation based on ROS

A robot's ability to perceive its environment and to navigate within it is the basis of its interaction with that environment. Scene understanding is a prerequisite for autonomous navigation, and navigation is in turn the practical goal of scene understanding; a semantic-map navigation system for mobile service robots is therefore designed. First, a dense point-cloud map is constructed with the aid of a deep convolutional neural network. Next, a mapping relationship between depth-camera data and laser data is established so that the two can be converted into each other, and a method is proposed for building a two-dimensional grid map from laser scans simulated from the depth-camera data. Finally, a method for integrating semantic information into the two-dimensional grid map is proposed, enabling semantic navigation for the service robot. Experimental results show that, compared with other robot navigation systems, the proposed approach better supports interaction between service robots, people, and the environment.
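The depth-camera-to-laser conversion described above is, in spirit, the projection used by tools such as ROS's depthimage_to_laserscan: each depth-image column is treated as one laser bearing and its depth as a planar range. The sketch below illustrates that geometry with NumPy only; the function name, camera intrinsics (fx, cx), and range limits are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def simulate_scan_from_depth_row(depth_row_m, fx, cx,
                                 range_min=0.45, range_max=8.0):
    """Turn one row of a depth image (in metres) into a simulated 2D laser scan.

    Pinhole model: column u maps to a bearing theta = atan((u - cx) / fx),
    and the planar range is the hypotenuse of the forward depth z and the
    lateral offset x = z * (u - cx) / fx. Intrinsics and range limits here
    are illustrative assumptions, not values from the paper.
    """
    u = np.arange(depth_row_m.shape[0], dtype=np.float64)
    z = depth_row_m.astype(np.float64)
    x = z * (u - cx) / fx                      # lateral offset (m)
    angles = np.arctan2(u - cx, fx)            # bearing of each column (rad)
    ranges = np.hypot(z, x)                    # planar range along each bearing (m)
    invalid = ~np.isfinite(z) | (ranges < range_min) | (ranges > range_max)
    ranges[invalid] = np.inf                   # mark unusable returns
    return angles, ranges

# Example: a 640-pixel row from a camera with fx = 525 px, cx = 319.5 px.
if __name__ == "__main__":
    row = np.full(640, 2.0)                    # a flat wall 2 m ahead
    angles, ranges = simulate_scan_from_depth_row(row, fx=525.0, cx=319.5)
    print(ranges[320], ranges[0])              # ranges grow toward the image edges
```

In a full system the resulting bearings and ranges would be packed into a sensor_msgs/LaserScan message and fed to a 2D mapping pipeline; that ROS plumbing is omitted from this sketch.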
