Revealing Gauguin: engaging visitors in robot guide's explanation in an art museum

Designing technologies that support the explanation of museum exhibits is a challenging domain. In this paper we develop an innovative approach: providing a robot guide with resources to engage visitors in an interaction about an art exhibit. We draw upon ethnographic fieldwork in an art museum, focusing on how tour guides interrelate talk and visual conduct, and in particular how they ask questions of different kinds to engage and involve visitors in lengthy explanations of an exhibit. From this analysis we have developed a robot guide that can coordinate its utterances and body movement while monitoring visitors' responses to them. Detailed analysis of the interaction between the robot and visitors in an art museum suggests that such simple devices, derived from the study of human interaction, can be useful in engaging visitors in explanations of complex artifacts.
