A multi-modal perception based architecture for a non-intrusive domestic assistant robot

We present a multi-modal-perception-based architecture for realizing a non-intrusive domestic assistant robot. The robot is non-intrusive in that it initiates interaction only after automatically detecting the user's intention to interact. All of the robot's actions are driven by multi-modal perception: user detection from RGB-D data, detection of the user's intention for interaction from RGB-D and audio data, and communication via speech recognition. Exploiting multi-modal cues across the different stages of robotic activity paves the way to successful robot runs.
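The non-intrusive behavior described above amounts to a gating decision: the robot engages only when presence and at least one intention cue agree. The following is a minimal sketch of such a gate, assuming hypothetical per-modality confidence scores (the class and function names, thresholds, and fusion rule are illustrative assumptions, not the paper's implementation).

```python
from dataclasses import dataclass

# All names and thresholds below are illustrative assumptions,
# not the authors' actual API or parameters.

@dataclass
class Perception:
    user_present: bool      # from RGB-D people detection
    head_pose_score: float  # 0..1, head orientation toward the robot (RGB-D)
    speech_score: float     # 0..1, speech-directed-at-robot confidence (audio)

def intends_interaction(p: Perception,
                        pose_thr: float = 0.6,
                        speech_thr: float = 0.5) -> bool:
    """Gate interaction non-intrusively: engage only when a user is
    present AND at least one intention cue (head pose toward the robot,
    or speech directed at it) exceeds its threshold."""
    if not p.user_present:
        return False
    return p.head_pose_score >= pose_thr or p.speech_score >= speech_thr
```

Under this sketch, a detected user looking at the robot (`Perception(True, 0.7, 0.1)`) triggers engagement, while strong cues without a detected user (`Perception(False, 0.9, 0.9)`) do not, which is one simple way to keep the robot from interrupting unprompted.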
