Perception-Action-Learning System for Mobile Social-Service Robots Using Deep Learning

We introduce a robust, integrated perception-action-learning system for mobile social-service robots. State-of-the-art deep learning techniques are incorporated into each module, significantly improving performance on social-service tasks. The system not only demonstrated fast and robust performance in a home-like environment but also achieved the highest score in the RoboCup 2017 @Home Social Standard Platform League (SSPL) held in Nagoya, Japan.
