Learning from Internet: Handling Uncertainty in Robotic Environment Modeling

Uncertainty is a major challenge for environment perception in autonomous robots. For instance, while building semantic maps (i.e., maps annotated with semantic labels such as object names), a robot may encounter unexpected objects of which it has no prior knowledge, leading to inevitable failures in traditional environment modeling software. The abundant knowledge accumulating on the Internet has the potential to help robots handle this kind of uncertainty, yet existing research has not addressed the issue. This paper proposes a cloud-based semantic mapping engine named SemaCloud, which not only augments the robot's environment modeling capability with rich cloud resources but also copes with uncertainty by drawing on Internet knowledge when necessary. It adopts a state-of-the-art Deep Neural Network (DNN) for real-time, accurate recognition of pre-trained objects. If an object lies beyond the knowledge of this DNN, a mechanism named QoS-aware cloud phase transition is triggered to seek help from existing recognition services on the Internet. Through a set of carefully designed algorithms, it maximizes the benefits of offloading while minimizing the negative impact on the Quality of Service (QoS) properties of robotic applications, which is essential in many robot scenarios. Experiments on both open datasets and real robots show that our approach successfully handles uncertainty in robotic semantic mapping without sacrificing critical real-time constraints.
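
The abstract does not spell out the offloading logic, but the described behavior (label known objects with the onboard DNN, and offload only unrecognized objects to a cloud service when the QoS budget permits) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the names LocalDetector, CloudRecognizer, conf_threshold, and latency_budget_s are assumptions introduced here for clarity.

```python
# Illustrative sketch of a local-DNN-first, QoS-aware cloud-fallback policy.
# All class and parameter names below are hypothetical stand-ins.
import time
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Detection:
    label: str         # semantic label, e.g. "chair"
    confidence: float  # detector score in [0, 1]
    box: tuple         # (x, y, w, h) in image coordinates


class LocalDetector:
    """Stand-in for the onboard pre-trained DNN."""
    def detect(self, image) -> List[Detection]:
        raise NotImplementedError


class CloudRecognizer:
    """Stand-in for an Internet recognition service with a known latency estimate."""
    def recognize(self, image_crop) -> Optional[Detection]:
        raise NotImplementedError

    def expected_latency_s(self) -> float:
        raise NotImplementedError


def label_objects(image, local: LocalDetector, cloud: CloudRecognizer,
                  conf_threshold: float = 0.6, latency_budget_s: float = 0.5):
    """Label objects locally; offload only low-confidence detections to the
    cloud, and only while the expected latency fits the remaining QoS budget."""
    results = []
    deadline = time.monotonic() + latency_budget_s
    for det in local.detect(image):
        if det.confidence >= conf_threshold:
            results.append(det)                       # known object: keep local result
            continue
        remaining = deadline - time.monotonic()
        if cloud.expected_latency_s() <= remaining:   # QoS check before offloading
            crop = image                              # real code would crop to det.box
            cloud_det = cloud.recognize(crop)
            results.append(cloud_det or det)          # prefer cloud label if one is returned
        else:
            results.append(det)                       # budget exhausted: fall back to local guess
    return results
```

The key design point captured here is that the cloud is consulted only for objects the onboard model cannot recognize, and only when doing so does not violate the application's real-time constraints.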
