Natural deictic communication with humanoid robots

A simple view of deictic communication involves only an indication process and a recognition process: one person points at an object and says something about it, such as "Look at this," and the other person recognizes the pointing gesture and attends to the indicated object. This simple view, however, omits three important processes: attention synchronization, context focus, and believability establishment. We refer to these as "facilitation processes" and implement them in a humanoid robot equipped with a motion-capture system. An experiment with 30 subjects revealed that the facilitation processes make deictic communication natural.
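To make the structure of this model concrete, the following is a minimal sketch of how the three facilitation processes might wrap the basic indication/recognition loop. The abstract gives no implementation details, so every class and method name here (Robot, Listener, gaze_at, point_at, attends_to, and so on) is an illustrative assumption, not the paper's API; the real system reads the partner's attention from motion capture rather than a simulated flag.

```python
"""Sketch of a five-phase deictic exchange: attention synchronization and
context focus precede indication; believability establishment follows
recognition. All names are hypothetical stand-ins."""

from dataclasses import dataclass


@dataclass
class Target:
    name: str
    position: tuple  # (x, y, z) in the robot's frame, hypothetical


class Listener:
    """Stand-in for the human partner. In the paper's setup the partner's
    attention would come from the motion-capture system; here it is a flag."""

    def __init__(self):
        self.focus = None

    def attends_to(self, target: Target) -> bool:
        return self.focus == target.name


class Robot:
    """Stand-in humanoid; real gaze and pointing would drive head/arm joints."""

    def gaze_at(self, what: str):
        print(f"[robot] gazing at {what}")

    def point_at(self, target: Target):
        print(f"[robot] pointing at {target.name} @ {target.position}")

    def say(self, utterance: str):
        print(f'[robot] "{utterance}"')


def deictic_exchange(robot: Robot, listener: Listener, target: Target) -> bool:
    # Facilitation 1: attention synchronization -- meet the listener's gaze
    # before pointing, so both parties start from mutual attention.
    robot.gaze_at("listener")
    # Facilitation 2: context focus -- shift gaze toward the target region
    # so the listener can anticipate where the pointing will land.
    robot.gaze_at(f"region around {target.name}")
    # Core indication process: point at the object and speak about it.
    robot.point_at(target)
    robot.say("Look at this.")
    # Recognition check + facilitation 3: believability establishment --
    # verify the listener attends to the indicated object; repair if not.
    if not listener.attends_to(target):
        robot.point_at(target)
        robot.say("This one, here.")
    return listener.attends_to(target)


if __name__ == "__main__":
    cup = Target("cup", (0.6, 0.2, 0.9))
    partner = Listener()
    partner.focus = "cup"  # simulate the listener following the gesture
    print("success:", deictic_exchange(Robot(), partner, cup))
```

The point of the sketch is the ordering: without the two leading phases the robot points "cold," and without the trailing check it cannot tell whether the indication succeeded, which is what the facilitation processes are meant to repair.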
