“Can you answer questions, Flobi?”: Interactionally defining a robot's competence as a fitness instructor

Users draw on four sources to judge a robot's competence: (1) the robot's voice, (2) its physical appearance, (3) the experience of interacting with it, and (4) the relationship between the robot's physical appearance and its conduct. Most approaches in social robotics are outcome-oriented and therefore use post-interaction questionnaires to measure a global evaluation of the robot. The present research takes a process-oriented approach to explore the factors relevant to the formation of users' attitudes toward the robot. To this end, an ethnographic approach (Conversation Analysis) was employed to analyze the micro-coordination between user and robot. We report initial findings from a study in which a robot took the role of a fitness instructor. Our results show that the participant judges the robot's capabilities step by step and differentiates its competence on two levels with regard to the robot's role: the robot as (1) a social/interactional co-participant and (2) a fitness instructor.