Footing in human-robot conversations: How robots might shape participant roles using gaze cues

During conversations, speakers establish their own and others' participant roles (who participates in the conversation and in what capacity), termed "footing" by Goffman, using gaze cues. In this paper, we study how a robot can establish the participant roles of its conversational partners using these cues. We designed a set of gaze behaviors for Robovie to signal three kinds of participant roles: addressee, bystander, and overhearer. We evaluated our design in a controlled laboratory experiment with 72 subjects in 36 trials. In three conditions, the robot signaled to two subjects, only by means of gaze, the roles of (1) two addressees, (2) an addressee and a bystander, or (3) an addressee and an overhearer. Behavioral measures showed that subjects' participation behavior conformed to the roles that the robot communicated to them. In subjective evaluations, significant differences were observed in feelings of groupness between addressees and others, and in liking between overhearers and others. Participation in the conversation did not affect task performance, measured by recall of information presented by the robot, but it did affect subjects' ratings of how much they attended to the task.
