Identifying the addressee in human-human-robot interactions based on head pose and speech

In this work we investigate the power of acoustic and visual cues, and their combination, to identify the addressee in a human-human-robot interaction. Based on eighteen audio-visual recordings of two human beings and a (simulated) robot, we discriminate the interaction of the two humans from the interaction of one human with the robot. The paper compares the results of three approaches. The first approach uses purely acoustic cues to find the addressee; low-level, feature-based cues as well as higher-level cues are examined. In the second approach we test whether the speaker's head pose is a suitable cue. Our results show that visually estimated head pose is a more reliable cue than the acoustic cues for identifying the addressee in the human-human-robot interaction. In the third approach we combine the acoustic and visual cues, which results in significant improvements.
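The combination of modalities described above can be pictured as a simple late fusion of per-utterance cue scores. The sketch below is purely illustrative and is not the classifier used in the paper: the features (head pan angle relative to the robot, speech rate, mean pitch), thresholds, and fusion weight are all hypothetical assumptions chosen only to show how a visual and an acoustic score might be combined into an addressee decision.

```python
# Illustrative late fusion of a head-pose cue and acoustic cues for
# addressee detection (robot-directed vs. human-directed speech).
# All feature names, thresholds, and weights are assumptions, not values
# taken from the paper.

from dataclasses import dataclass


@dataclass
class Utterance:
    head_pan_deg: float    # estimated head pan relative to the robot (0 = facing robot)
    speech_rate: float     # hypothetical rate feature, e.g. syllables per second
    mean_pitch_hz: float   # average F0 of the utterance


def visual_score(u: Utterance) -> float:
    """Score in [0, 1]: high when the speaker's head is turned toward the robot."""
    # Assumed: head pan within ~30 degrees of the robot counts as "facing it".
    return max(0.0, 1.0 - abs(u.head_pan_deg) / 30.0)


def acoustic_score(u: Utterance) -> float:
    """Score in [0, 1] from two coarse acoustic cues (assumed thresholds)."""
    slow = 1.0 if u.speech_rate < 3.5 else 0.0        # slower speech toward the robot
    high_pitch = 1.0 if u.mean_pitch_hz > 180.0 else 0.0  # raised pitch toward the robot
    return 0.5 * slow + 0.5 * high_pitch


def is_robot_addressed(u: Utterance, w_visual: float = 0.7) -> bool:
    """Late fusion: weighted sum of the two cue scores against a fixed threshold."""
    fused = w_visual * visual_score(u) + (1.0 - w_visual) * acoustic_score(u)
    return fused >= 0.5


if __name__ == "__main__":
    to_robot = Utterance(head_pan_deg=5.0, speech_rate=3.0, mean_pitch_hz=200.0)
    to_human = Utterance(head_pan_deg=55.0, speech_rate=4.5, mean_pitch_hz=150.0)
    print(is_robot_addressed(to_robot))   # True
    print(is_robot_addressed(to_human))   # False
```

Weighting the visual score more heavily in this sketch mirrors the finding that head pose is the more reliable single cue, while the acoustic score still contributes when the head-pose evidence is ambiguous.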
