As an early behavioral study of the non-verbal features a robot tour guide could use to analyze a crowd, personalize an interaction, and maintain high levels of engagement, we analyze participant gaze statistics in response to a robot tour guide's deictic gestures. Thirty-seven participants took part, split into nine groups of three to five people each. In the groups with the lowest engagement levels, the aggregate gaze response to the robot's pointing gesture involved the fewest total glance shifts, the least time spent looking at the indicated object, and no intra-participant gaze. Our diverse participants had overlapping engagement ratings within their groups, and we found that a robot tracking group-level rather than individual analytics could capture less noisy and often stronger trends relating gaze features to self-reported engagement scores. Thus we find indications that aggregate group analysis captures more salient and accurate assessments of overall human-robot interactions, even with lower-resolution features.