To Beat or Not to Beat: Beat Gestures in Direction Giving

Research on gesture generation for embodied conversational agents (ECAs) has mostly focused on gesture types such as pointing and iconic gestures, largely ignoring another type that human speakers use frequently: beat gestures. Analysis of a corpus of route descriptions showed that although annotators achieved very low agreement when applying a ‘beat filter’ aimed at identifying the physical features of beat gestures, they could reliably distinguish beats from other gestures in a more intuitive manner. Beat gestures made up more than 30% of the gestures in our corpus, and they were sometimes used to express concepts for which other gesture types seemed a more obvious choice. Based on these findings, we propose a simple probabilistic model of beat production for ECAs. However, more research is needed to determine why direction givers sometimes use beats when other gestures seem more appropriate, and vice versa.
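The proposed probabilistic model could be sketched roughly as follows. Only the corpus statistic (beats accounting for over 30% of gestures) comes from the abstract; the gesture inventory, the function names, and the per-slot Bernoulli selection scheme are illustrative assumptions, not the authors' actual model.

```python
import random

# Hypothetical sketch of probabilistic beat production for an ECA.
# Assumption: at each gesture slot, a beat may override the "obvious"
# gesture (e.g. pointing or iconic) with a fixed probability derived
# from the corpus proportion reported in the abstract (> 30%).

BEAT_PROBABILITY = 0.3  # approximate share of beats in the corpus

def choose_gesture(candidate, rng=random):
    """Return 'beat' with probability BEAT_PROBABILITY, otherwise
    keep the candidate gesture suggested by the content (pointing,
    iconic, ...). This mirrors the finding that beats sometimes
    replace seemingly more obvious gesture types."""
    if rng.random() < BEAT_PROBABILITY:
        return "beat"
    return candidate

# Over many gesture slots, roughly 30% come out as beats.
rng = random.Random(42)
gestures = [choose_gesture("pointing", rng) for _ in range(10000)]
beat_fraction = gestures.count("beat") / len(gestures)
```

A model like this trades linguistic precision for coverage: it reproduces the corpus-level frequency of beats without attempting to predict exactly which words they accompany, which the abstract identifies as an open research question.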
