Estimating human interest and attention via gaze analysis

In this paper we analyze joint attention between a robot that presents features of its surroundings and its human audience. In a statistical analysis of hand-coded video data, we find that the robot's physical indications lead to greater attentional coherence between robot and humans than do its verbal indications. We also find that aspects of how the tour group participants look at robot-indicated objects, including when they look and how long they look, correlate significantly with their self-reported engagement scores for the presentations. Higher engagement would suggest a greater degree of interest in, and attention to, the material presented. These findings will seed future gaze-tracking systems that will enable robots to estimate listeners' state. By tracking audience gaze, our goal is to enable robots to tailor the type of content, and the manner of its presentation, to the preferences or educational goals of a particular crowd, e.g., in a tour guide, classroom, or entertainment setting.
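The correlation analysis described above can be illustrated with a minimal sketch. The data below are hypothetical placeholders rather than the study's measurements, and the use of Pearson's r is an assumption, since the abstract does not name the specific statistic used.

```python
# Minimal sketch of the kind of analysis described above: correlating a
# hand-coded gaze feature with self-reported engagement scores.
# All values are hypothetical placeholders; Pearson's r is assumed
# purely for illustration.
from scipy.stats import pearsonr

# Per-participant gaze dwell time on robot-indicated objects (seconds)
dwell_time = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7]
# Matching self-reported engagement scores (e.g., a 1-7 Likert scale)
engagement = [3, 5, 2, 6, 4, 5]

r, p = pearsonr(dwell_time, engagement)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # "significant" if p < 0.05
```

The same pattern would extend to other gaze features, such as the latency between a robot indication and the participant's first look at the indicated object.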
