Individualized Gesturing Outperforms Average Gesturing - Evaluating Gesture Production in Virtual Humans

How does a virtual agent's gesturing behavior influence the user's perception of communication quality and of the agent's personality? This question was investigated in an evaluation study of co-verbal iconic gestures produced with the Bayesian network-based production model GNetIc. A network learned from a corpus of several speakers was compared with networks learned from individual speaker data, as well as with two control conditions. Results showed that gestures generated automatically with GNetIc increased the perceived quality of an object description given by a virtual human. Moreover, gesturing behavior generated with individual speaker networks was rated more positively in terms of likeability, competence, and human-likeness.
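To make the compared conditions concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of learning one gesture-production Bayesian network from a pooled multi-speaker corpus and one network per individual speaker. It uses pgmpy for structure and parameter learning; the corpus file name, the speaker column, and the gesture-feature annotations are illustrative assumptions.

```python
# Sketch: pooled vs. speaker-specific Bayesian networks for gesture production.
# Assumes a CSV of discrete gesture/context annotations with a "speaker" column;
# all column names are hypothetical stand-ins for the corpus features.
import pandas as pd
from pgmpy.estimators import BicScore, HillClimbSearch, MaximumLikelihoodEstimator
from pgmpy.models import BayesianNetwork


def learn_gesture_network(data: pd.DataFrame) -> BayesianNetwork:
    """Learn the structure and parameters of a gesture-production network."""
    # Score-based structure search over the annotated features.
    structure = HillClimbSearch(data).estimate(scoring_method=BicScore(data))
    model = BayesianNetwork(structure.edges())
    # Fit conditional probability tables from the same data.
    model.fit(data, estimator=MaximumLikelihoodEstimator)
    return model


corpus = pd.read_csv("gesture_corpus.csv")  # hypothetical annotated corpus

# One network learned from all speakers pooled together ...
combined_net = learn_gesture_network(corpus.drop(columns=["speaker"]))

# ... versus one network per individual speaker.
individual_nets = {
    speaker: learn_gesture_network(group.drop(columns=["speaker"]))
    for speaker, group in corpus.groupby("speaker")
}
```

Either kind of network can then be queried at generation time to choose gesture features for a given referent and discourse context, which is the distinction the evaluation compares.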
