Multimodal Human-Robot Interface with Gesture-Based Virtual Collaboration

This paper proposes an intuitive teleoperation scheme that uses whole-body human gestures in conjunction with a multimodal human-robot interface. To cope with the complexity of dynamic daily environments, the authors further apply haptic point-cloud rendering and virtual collaboration to the system. All of these functions are realized on a newly proposed portable hardware platform called the "mobile iSpace". First, the environment surrounding the teleoperated robot is captured by a depth camera and reconstructed as a 3D point cloud. A virtual world is then generated from the point cloud, and a virtual model of the teleoperated robot is placed in it. The operator teleoperates the humanoid robot with whole-body gestures, which are captured in real time by a depth camera placed on the operator side. The operator simultaneously receives visual and vibrotactile feedback through a head-mounted display and a vibrotactile glove. All system components (the human operator, the teleoperated robot, and the feedback devices) are connected through an Internet-based virtual collaboration system for flexible accessibility. Experiments demonstrate the effectiveness of the proposed scheme, showing that operators can access the remotely located robot at any time and from any place.
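The reconstruction step described above, in which a depth camera's output becomes a 3D point cloud of the remote environment, is commonly implemented as a pinhole-model back-projection. The following is a minimal sketch of that idea, not the paper's implementation; the camera intrinsics (fx, fy, cx, cy) and the synthetic depth image are illustrative assumptions.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into a 3D point cloud
    using the pinhole camera model: X=(u-cx)*Z/fx, Y=(v-cy)*Z/fy, Z=depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Tiny synthetic example: a 2x2 depth image, every pixel 1 m away.
depth = np.ones((2, 2))
pts = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

In a real pipeline the intrinsics would come from the depth camera's calibration, and the resulting points would be transformed into a world frame before being rendered (visually and haptically) in the virtual environment.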