Humanlike conversation with gestures and verbal cues based on a three-layer attention-drawing model

When describing a physical object, a speaker indicates which object is meant by pointing and by using reference terms such as ‘this’ and ‘that’, which quickly inform the listener of the object's location. This research therefore proposes a three-layer attention-drawing model for humanoid robots that incorporates such gestures and verbal cues. The proposed model consists of three sub-models: the Reference Term Model (RTM), the Limit Distance Model (LDM) and the Object Property Model (OPM). The RTM selects a reference term appropriate to the object's distance, based on a quantitative analysis of human behaviour. The LDM decides whether a property of the object, such as its colour, should be added as a further term to distinguish the object from its neighbours. The OPM determines which property should be used for this additional reference. Based on this model, an attention-drawing system was developed for the communication robot ‘Robovie’, and its effectiveness was tested.
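The abstract describes a three-stage decision pipeline: the RTM picks a reference term from distance, the LDM decides whether a distinguishing property is needed, and the OPM selects that property. The sketch below is a minimal illustration of how the three sub-models could compose; the distance thresholds, class names and colour-first property rule are assumptions for illustration, not values or rules taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three-layer attention-drawing pipeline.
# All thresholds and property choices are illustrative placeholders;
# the paper derives its boundaries from observed human behaviour.

@dataclass
class TargetObject:
    distance_m: float              # distance from the robot to the object
    colour: str
    size: str
    neighbour_colours: list[str]   # colours of nearby candidate objects

def reference_term(distance_m: float) -> str:
    """Reference Term Model (RTM): choose 'this'/'that' from distance.
    The 1.5 m cut-off is an assumed placeholder value."""
    return "this" if distance_m < 1.5 else "that"

def needs_property(obj: TargetObject, limit_m: float = 2.0) -> bool:
    """Limit Distance Model (LDM): beyond an assumed limit distance, or
    when neighbours share the target's colour, add a property term."""
    return obj.distance_m > limit_m or obj.colour in obj.neighbour_colours

def choose_property(obj: TargetObject) -> str:
    """Object Property Model (OPM): pick the property that best
    distinguishes the object (here, a naive colour-first rule)."""
    if obj.colour not in obj.neighbour_colours:
        return obj.colour          # colour alone disambiguates
    return obj.size                # otherwise fall back to size

def attention_drawing_utterance(obj: TargetObject) -> str:
    """Compose the verbal cue; the robot would point while speaking."""
    term = reference_term(obj.distance_m)
    if needs_property(obj):
        return f"{term} {choose_property(obj)} one"
    return f"{term} one"

if __name__ == "__main__":
    cup = TargetObject(distance_m=2.5, colour="red", size="small",
                       neighbour_colours=["red", "blue"])
    print(attention_drawing_utterance(cup))  # -> "that small one"
```

Layering the decision this way keeps each sub-model independently replaceable: the distance boundaries, the limit-distance test and the property-ranking rule can each be re-fitted to behavioural data without touching the other two.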
