Recognizing the Visual Focus of Attention for Human-Robot Interaction

We address the recognition of people's visual focus of attention (VFOA), the discrete version of gaze that indicates who is looking at whom or what. As a good indicator of addressee-hood (who speaks to whom, and in particular whether a person is speaking to the robot) and of people's interest, VFOA is an important cue for supporting dialog modeling in human-robot interactions involving multiple persons. In the absence of high-definition images, we rely on people's head pose to recognize the VFOA. Rather than assuming a fixed mapping between head pose directions and gaze target directions, we investigate models that perform a dynamic (temporal) mapping, implicitly accounting for the varying body/shoulder orientations of a person over time, as well as unsupervised adaptation. Evaluated on a public dataset and on data recorded with the humanoid robot Nao, the method exhibits better adaptivity and versatility, producing equal or better performance than a state-of-the-art approach, although the proposed unsupervised adaptation does not improve results.
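To make the head-pose-to-gaze mapping concrete, here is a minimal Python sketch of VFOA classification from a head pan angle. It assumes a linear eye-head coordination model (the head carries only a fraction of each gaze shift) together with a slowly drifting body/shoulder reference direction that makes the mapping dynamic; the target names, angles, and constants are illustrative assumptions, not the paper's actual model or parameters.

```python
import numpy as np

# Hypothetical gaze targets (pan angles in degrees, relative to the camera);
# names and values are illustrative, not taken from the paper.
TARGETS = {"robot": 0.0, "person_left": -35.0, "person_right": 40.0}

HEAD_GAZE_RATIO = 0.6      # assumed fraction of a gaze shift carried by the head
SIGMA = 12.0               # assumed std-dev (deg) of head pose around its prediction
REF_LEARNING_RATE = 0.05   # slow update rate for the body/shoulder reference

def predict_head_pose(target_dir, body_ref):
    """Head pose expected when gazing at target_dir, given the current
    body/shoulder reference direction (linear eye-head coordination model)."""
    return body_ref + HEAD_GAZE_RATIO * (target_dir - body_ref)

def classify_vfoa(head_pan, body_ref):
    """Pick the target whose predicted head pose best explains the observation,
    under a Gaussian likelihood on the pan angle."""
    scores = {
        name: -((head_pan - predict_head_pose(d, body_ref)) ** 2) / (2 * SIGMA ** 2)
        for name, d in TARGETS.items()
    }
    return max(scores, key=scores.get)

def update_reference(body_ref, head_pan):
    """Drift the reference toward the long-term average head pose, so the
    pose-to-gaze mapping follows slow changes in body orientation."""
    return (1 - REF_LEARNING_RATE) * body_ref + REF_LEARNING_RATE * head_pan

# Usage: a stream of head-pan estimates (degrees) from a head pose tracker
body_ref = 0.0
for head_pan in [2.0, -18.0, -20.0, 25.0, 1.0]:
    print(classify_vfoa(head_pan, body_ref))
    body_ref = update_reference(body_ref, head_pan)
```

A static mapping would fix body_ref once; updating it online is one simple way to account for varying shoulder orientation, in the spirit of the dynamic mapping the abstract describes.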
