Human-Robot Interaction Using Pointing Gestures

The research presented in this paper focuses on human-robot interaction (HRI) via pointing gestures. We have designed a method in which a human operator points to a specific location and a mobile robot subsequently drives to the designated location. A depth sensor is used to capture the operator's position and gesture. The key input to our algorithm is the set of 3D positions of the operator's body joints. We extend our previous research with an analysis of two depth sensors, Kinect v1 and Kinect v2, and their ability to detect body joints.
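A minimal sketch of the core geometric step follows, assuming (as one common formulation of pointing-gesture navigation, not necessarily the exact variant used here) that the target is found by extending the ray through two arm joints reported by the depth sensor, e.g. shoulder and hand, until it intersects the floor plane. The joint pair, the floor-aligned coordinate frame with the z axis pointing up, and the function name are illustrative assumptions.

```python
import numpy as np

def pointed_floor_target(shoulder, hand, floor_z=0.0):
    """Intersect the shoulder->hand pointing ray with the floor plane z = floor_z.

    Both joints are 3D points (x, y, z) in an assumed floor-aligned world frame
    with z pointing up. Returns the (x, y) target on the floor, or None if the
    ray does not point down toward the floor.
    """
    shoulder = np.asarray(shoulder, dtype=float)
    hand = np.asarray(hand, dtype=float)
    direction = hand - shoulder
    if direction[2] >= 0.0:        # ray is horizontal or points up: no floor hit
        return None
    t = (floor_z - shoulder[2]) / direction[2]
    if t <= 0.0:                   # intersection would lie behind the operator
        return None
    target = shoulder + t * direction
    return target[0], target[1]

# Example: shoulder at 1.4 m height, hand extended forward and slightly lowered.
print(pointed_floor_target(shoulder=(0.0, 0.0, 1.4), hand=(0.35, 0.0, 1.1)))
```

The resulting (x, y) floor coordinate could then serve as the navigation goal sent to the mobile robot; other joint pairs (e.g. elbow-hand or head-hand) yield the same construction with a different ray.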
