Cloud VR system with immersive interfaces to collect human gaze controls and interpersonal behaviors

In this study, we present a cloud VR system with immersive interfaces for collecting human gaze controls and interpersonal behaviors. Users log in to a shared VR space and communicate naturally with each other and with a robot through immersive interfaces: an HMD (Head-Mounted Display) provides immersive visualization, and a Kinect sensor provides motion control. Anyone who owns these devices can join human-robot interaction experiments in the VR space from anywhere in the world. While humans and the robot interact, their gaze controls and interpersonal behaviors are recorded in a database, so the proposed system enables the collection of large amounts of gaze-control and interpersonal-behavior data through immersive interfaces. Two application experiments, learning object attributes from human participants and observing greetings between two persons, demonstrate the effectiveness of the proposed VR system for collecting human gaze controls and interpersonal behaviors.
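The abstract does not specify how gaze and behavior records are represented in the database, so the following Python sketch illustrates one plausible logging schema under stated assumptions: the class names (GazeSample, BehaviorSample, InteractionLogger) and all field names are hypothetical, and a JSON-lines file stands in for the actual database backend.

    """Minimal sketch of a gaze/behavior logging schema (all names illustrative)."""
    import json
    import time
    from dataclasses import dataclass, asdict
    from typing import Dict, Tuple

    @dataclass
    class GazeSample:
        """One gaze-control record derived from the HMD (assumed fields)."""
        user_id: str
        timestamp: float
        head_position: Tuple[float, float, float]   # HMD position in VR-space coordinates
        gaze_direction: Tuple[float, float, float]  # unit vector from HMD orientation
        gaze_target: str                            # label of the object or avatar looked at

    @dataclass
    class BehaviorSample:
        """One interpersonal-behavior record from the Kinect skeleton (assumed fields)."""
        user_id: str
        timestamp: float
        joints: Dict[str, Tuple[float, float, float]]  # joint name -> 3D position

    class InteractionLogger:
        """Appends samples as JSON lines; a real deployment would write to a database."""

        def __init__(self, path: str):
            self._file = open(path, "a", encoding="utf-8")

        def log(self, sample) -> None:
            record = asdict(sample)
            record["kind"] = type(sample).__name__  # tag record type for later queries
            self._file.write(json.dumps(record) + "\n")

        def close(self) -> None:
            self._file.close()

    if __name__ == "__main__":
        logger = InteractionLogger("session.jsonl")
        logger.log(GazeSample("user01", time.time(),
                              (0.0, 1.6, 0.0), (0.0, 0.0, 1.0), "robot"))
        logger.log(BehaviorSample("user01", time.time(),
                                  {"head": (0.0, 1.6, 0.0), "hand_right": (0.3, 1.2, 0.4)}))
        logger.close()

Timestamped, per-user records of this kind would let the two reported experiments (object-attribute learning and greeting observation) replay and align gaze and body-motion streams after collection; the exact fields and storage format used by the authors are not given in the abstract.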
