The production of co-speech iconic gestures: Empirical study and computational simulation with virtual agents

The use of speech-accompanying iconic gestures is a ubiquitous characteristic of human-human communication, especially when spatial information is expressed. At the starting point of this thesis, however, it was a largely open question why different gestures take the particular physical form they do. Accordingly, previous computational models simulating gesture use were of limited explanatory value. The goal of this thesis was to develop a comprehensive computational simulation model for the production of co-speech iconic gestures, to be realized in virtual agents. The rationale behind this objective was twofold: to devise and probe a predictive model of gesture use in order to gain insight into human gesture production, and thereby to improve human-agent interaction so that it progresses toward intuitive, human-like communication. As an empirical basis for the generation model, a corpus of natural speech and gesture use was analyzed statistically, yielding novel findings on when and how speakers use gestures. Iconic gesture use was found to be influenced not only by the shape of the object to be depicted, but also by other characteristics of the referent, by the linguistic and discourse-contextual situation, and by the speaker's previous gestural behavior. Further, the choice of gestural representation technique (such as placing or drawing) proved decisive for the physical form of iconic gestures. Finally, the analysis revealed considerable inter-individual differences, both at the surface of gestural behavior and in how strongly particular influencing relations held. Based on these empirical insights, the Generation Network for Iconic Gestures (GNetIc) was developed: a computational simulation model for the production of speech-accompanying iconic gestures that goes beyond previous systems in several respects.
First, the model combines data-driven machine learning techniques with rule-based decision making to account both for inter-individual differences in gesture use and for patterns of form-meaning mappings specific to particular representation techniques. Second, the network accounts for the fact that the physical appearance of generated gestures is shaped by multiple factors: characteristic features of the referent, which account for iconicity, as well as contextual factors such as the current communicative goal, the information state, and previous gesture use. Third, learning gesture networks from individual speakers' data yields an easily interpretable visual representation of each speaker's preferences and strategies in composing gestures, and makes these available for generating novel gesture forms in the style of the respective speaker. GNetIc models were put to use in an overall architecture for integrated speech and gesture generation. Equipped with the necessary knowledge sources, i.e., communicative plans, a lexicon, a grammar, and propositional and imagistic knowledge, a virtual agent was enabled to autonomously explain buildings of a virtual environment using speech and gestures. By switching between the respective decision networks, the system can simulate speaker-specific gesture use. In line with the twofold rationale of this thesis, the GNetIc model was evaluated in two ways. First, in comparison with empirically observed gestural behavior, the model was shown to successfully approximate human use of iconic gestures, especially when capturing the characteristics of individual speakers' gesture styles. Second, when applied in a virtual agent, the generated gestural behavior was rated positively by human recipients. In particular, individualized GNetIc-generated gestures increased the perceived quality of object descriptions.
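The combination GNetIc embodies, learned speaker-specific preferences plus rule-based fallbacks, can be sketched in miniature. The following Python snippet is an illustrative simplification, not the thesis's actual implementation: the feature names, technique labels, and simple frequency-counting model are assumptions standing in for the Bayesian decision networks that the thesis learns from corpus data.

```python
from collections import Counter, defaultdict

def learn_technique_model(observations):
    """Count how often each representation technique occurs per feature context."""
    counts = defaultdict(Counter)
    for context, technique in observations:
        counts[context][technique] += 1
    return counts

def choose_technique(model, context, default="placing"):
    """Data-driven step: pick the most frequent technique for a known context.
    Rule-based fallback: return a default technique for unseen contexts."""
    if model[context]:
        return model[context].most_common(1)[0][0]
    return default

# Toy "corpus" of (referent/context features, observed technique) pairs
# for one hypothetical speaker.
corpus = [
    (("shape", "longish"), "drawing"),
    (("shape", "longish"), "drawing"),
    (("shape", "longish"), "shaping"),
    (("position", "relative"), "placing"),
]
model = learn_technique_model(corpus)

choose_technique(model, ("shape", "longish"))  # most frequent: "drawing"
choose_technique(model, ("size", "large"))     # unseen context: rule-based default
```

In this simplified picture, swapping in a different speaker's counts corresponds to switching between the per-speaker decision networks mentioned above.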
Moreover, the virtual agent itself was rated more positively in terms of verbal capability, likeability, competence, and human-likeness. 
Accordingly, the results of this work provide first steps toward a more thorough understanding of iconic gesture production in humans, and of how gesture use may improve human-agent interaction.

[1]  Brian Butterworth,et al.  Gesture and Silence as Indicators of Planning in Speech , 1978 .

[2]  Zeshu Shao,et al.  The Role of Synchrony and Ambiguity in Speech–Gesture Integration during Comprehension , 2011, Journal of Cognitive Neuroscience.

[3]  Ewald Lang,et al.  The Semantics of Dimensional Designation of Spatial Objects , 1989 .

[4]  Mary Ellen Foster,et al.  Avoiding Repetition in Generated Text , 2007, ENLG.

[5]  Stefan Kopp,et al.  Individualized Gesturing Outperforms Average Gesturing - Evaluating Gesture Production in Virtual Humans , 2010, IVA.

[6]  Stefan Kopp,et al.  Systematicity and Idiosyncrasy in Iconic Gesture Use: Empirical Analysis and Computational Modeling , 2009, Gesture Workshop.

[7]  Sotaro Kita,et al.  How representational gestures help speaking , 2000 .

[8]  W. Rogers,et al.  THE CONTRIBUTION OF KINESIC ILLUSTRATORS TOWARD THE COMPREHENSION OF VERBAL BEHAVIOR WITHIN UTTERANCES , 1978 .

[9]  Pierre Feyereisen,et al.  The Meaning of Gestures - What Can Be Understood Without Speech , 1988 .

[10]  De Ruiter,et al.  Gesture and speech production , 1998 .

[11]  Wilhelm Wundt,et al.  The Language of Gestures , 1973 .

[12]  Stefan Kopp,et al.  The Behavior Markup Language: Recent Developments and Challenges , 2007, IVA.

[13]  Stefan Kopp,et al.  MODELING THE PRODUCTION OF COVERBAL ICONIC GESTURES BY LEARNING BAYESIAN DECISION NETWORKS , 2010, Appl. Artif. Intell..

[14]  Stefan Kopp,et al.  Towards integrated microplanning of language and iconic gesture for multimodal output , 2004, ICMI '04.

[15]  Annette Herskovits Language and Spatial Cognition: An Interdisciplinary Study of the Prepositions in English , 2009 .

[16]  Janet Beavin Bavelas,et al.  Gesturing on the telephone: Independent effects of dialogue and visibility. , 2008 .

[17]  Finn V. Jensen,et al.  Bayesian Networks and Decision Graphs , 2001, Statistics for Engineering and Information Science.

[18]  Jacob Cohen A Coefficient of Agreement for Nominal Scales , 1960 .

[19]  Stefan Kopp,et al.  Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications , 2013, Journal on Multimodal User Interfaces.

[20]  Judith Holler,et al.  How iconic gestures and speech interact in the representation of meaning: Are both aspects really integral to the process? , 2003 .

[21]  A. Kendon Gesticulation and Speech: Two Aspects of the Process of Utterance , 1981 .

[22]  Ehud Reiter,et al.  Book Reviews: Building Natural Language Generation Systems , 2000, CL.

[23]  Cornelia Müller,et al.  Redebegleitende Gesten : Kulturgeschichte, Theorie, Sprachvergleich , 1998 .

[24]  Susan Duncan,et al.  Growth points in thinking-for-speaking , 1998 .

[25]  Daphne Koller,et al.  Ordering-Based Search: A Simple and Effective Algorithm for Learning Bayesian Networks , 2005, UAI.

[26]  S. Lauritzen The EM algorithm for graphical association models with missing data , 1995 .

[27]  Tomi Silander,et al.  Comparing Predictive Inference Methods for Discrete , 1997 .

[28]  Robert Dale,et al.  Speaker-Dependent Variation in Content Selection for Referring Expression Generation , 2010, ALTA.

[29]  William D. Hopkins,et al.  The effect of thought structure on the production of lexical movements , 2002, Brain and Language.

[30]  L A Thompson,et al.  Evaluation and integration of speech and pointing gestures during referential understanding. , 1986, Journal of experimental child psychology.

[31]  Gregory F. Cooper,et al.  A Bayesian Method for the Induction of Probabilistic Networks from Data , 1992 .

[32]  Stefan Kopp,et al.  Multimodal Content Representation for Speech and Gesture Production , 2008 .

[33]  Justine Cassell,et al.  BEAT: the Behavior Expression Animation Toolkit , 2001, Life-like characters.

[34]  Dirk Heylen,et al.  Experimenting with the Gaze of a Conversational Agent , 2002 .

[35]  D. McNeill Gesture and Thought , 2005 .

[36]  S. Kosslyn Seeing and imagining in the cerebral hemispheres: a computational approach. , 1987, Psychological review.

[37]  Yihsiu Chen,et al.  Language and Gesture: Lexical gestures and lexical access: a process model , 2000 .

[38]  G. Beattie,et al.  Do Iconic Hand Gestures Really Contribute to the Communication of Semantic Information in a Face-to-Face Context? , 2009 .

[39]  L. C. van der Gaag,et al.  Building probabilistic networks: Where do the numbers come from? - a guide to the literature , 2000 .

[40]  Matthew Stone,et al.  Speaking with hands: creating animated conversational characters from recordings of human performance , 2004, ACM Trans. Graph..

[41]  P. Spirtes,et al.  An Algorithm for Fast Recovery of Sparse Causal Graphs , 1991 .

[42]  Michael Kipp,et al.  Gesture generation by imitation: from human behavior to computer character animation , 2005 .

[43]  C. Creider Hand and Mind: What Gestures Reveal about Thought , 1994 .

[44]  Geoffrey Beattie,et al.  An experimental investigation of the role of different types of iconic gesture in communication: A semantic feature approach , 2003 .

[45]  Sotaro Kita,et al.  Cross-cultural variation of speech-accompanying gesture: A review , 2009, Speech Accompanying-Gesture.

[46]  Amy J. C. Cuddy,et al.  Universal dimensions of social cognition: warmth and competence , 2007, Trends in Cognitive Sciences.

[47]  Paul Thagard,et al.  Dynamic Imagery: A Computational Model of Motion and Visual Analogy , 2002 .

[48]  Adam Kendon,et al.  How gestures can become like words , 1988 .

[49]  Demetri Terzopoulos,et al.  A decision network framework for the behavioral animation of virtual humans , 2007, SCA '07.

[50]  Barbara Hayes-Roth,et al.  A Blackboard Architecture for Control , 1985, Artif. Intell..

[51]  J. York,et al.  Bayesian Graphical Models for Discrete Data , 1995 .

[52]  R. Krauss,et al.  Nonverbal Behavior and Nonverbal Communication: What do Conversational Hand Gestures Tell Us? , 1996 .

[53]  Martha W. Alibali,et al.  Gesture in Spatial Cognition: Expressing, Communicating, and Thinking About Spatial Information , 2005, Spatial Cogn. Comput..

[54]  Joaquín Abellán,et al.  Some Variations on the PC Algorithm , 2006, Probabilistic Graphical Models.

[55]  Sotaro Kita,et al.  The content of the message influences the hand choice in co-speech gestures and in gesturing without speaking , 2003, Brain and Language.

[56]  Dafydd Gibbon,et al.  CoGesT: a Formal Transcription System for Conversational Gesture , 2004, LREC.

[57]  A. Paivio Mental Representations: A Dual Coding Approach , 1986 .

[58]  Asli Ozyurek,et al.  Speech-gesture relationship across languages and in second language learners. Implications for spatial thinking and speaking , 2002 .

[59]  Ron Artstein,et al.  Survey Article: Inter-Coder Agreement for Computational Linguistics , 2008, CL.

[60]  Allan Collins,et al.  A spreading-activation theory of semantic processing , 1975 .

[61]  Helmut Schmidt,et al.  Probabilistic part-of-speech tagging using decision trees , 1994 .

[62]  Susan Goldin Hearing gesture : how our hands help us think , 2003 .

[63]  Kristinn R. Thórisson,et al.  The Power of a Nod and a Glance: Envelope Vs. Emotional Feedback in Animated Conversational Agents , 1999, Appl. Artif. Intell..

[64]  R. Krauss,et al.  Do conversational hand gestures communicate? , 1991, Journal of personality and social psychology.

[65]  Rebecca A. Webb Linguistic features of metaphoric gestures , 1997 .

[66]  Hedda Lausberg,et al.  Methods in Gesture Research: , 2009 .

[67]  C. Hartshorne,et al.  Collected Papers of Charles Sanders Peirce , 1935, Nature.

[68]  John R. Anderson A Spreading Activation Theory of Memory , 1988 .

[69]  Stefan Kopp,et al.  Towards a Common Framework for Multimodal Generation: The Behavior Markup Language , 2006, IVA.

[70]  Maurizio Mancini,et al.  Formational parameters and adaptive prototype instantiation for MPEG-4 compliant gesture synthesis , 2002, Proceedings of Computer Animation 2002 (CA 2002).

[71]  Francis K. H. Quek,et al.  Gesture and Speech Multimodal Conversational Interaction , 2001 .

[72]  Kenneth Holmqvist,et al.  What speakers do and what addressees look at: visual attention to gestures in human interaction live and on video , 2006 .

[73]  J. Cassell,et al.  Intersubjectivity in human-agent interaction , 2007 .

[74]  Matthew Stone Lexicalized Grammar 101 , 2002, ACL 2002.

[75]  Stacy Marsella,et al.  Nonverbal Behavior Generator for Embodied Conversational Agents , 2006, IVA.

[76]  Beth Levy,et al.  Conceptual Representations in Lan-guage Activity and Gesture , 1980 .

[77]  M. Halliday NOTES ON TRANSITIVITY AND THEME IN ENGLISH. PART 2 , 1967 .

[78]  Mark Steedman,et al.  Discourse and Information Structure , 2003, J. Log. Lang. Inf..

[79]  Catherine Pelachaud,et al.  Subtleties of facial expressions in embodied agents , 2002, Comput. Animat. Virtual Worlds.

[80]  Marianne Gullberg,et al.  Language-specific encoding of placement events in gestures , 2010 .

[81]  Jean-Claude Martin,et al.  The effects of speech-gesture cooperation in animated agents' behavior in multimedia presentations , 2007, Interact. Comput..

[82]  J. R. Landis,et al.  The measurement of observer agreement for categorical data. , 1977, Biometrics.

[83]  M. Denis The description of routes : A cognitive approach to the production of spatial discourse , 1997 .

[84]  A. Cohen,et al.  Intentionality in the use of hand illustrators in face-to-face communication situations. , 1973 .

[85]  De Ruiter,et al.  Some multimodal signals in humans , 2007 .

[86]  Judith Holler,et al.  A micro-analytic investigation of how iconic gestures and speech represent core semantic features in talk , 2002 .

[87]  Sotaro Kita,et al.  What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking , 2003 .

[88]  Ellen F. Prince,et al.  Toward a taxonomy of given-new information , 1981 .

[89]  Maurizio Mancini,et al.  Implementing Expressive Gesture Synthesis for Embodied Conversational Agents , 2005, Gesture Workshop.

[90]  Justine Cassell,et al.  Semantic and Discourse Information for Text-to-Speech Intonation , 1997, Workshop On Concept To Speech Generation Systems.

[91]  R. Krauss,et al.  The Communicative Value of Conversational Hand Gesture , 1995 .

[92]  A. Garnham,et al.  The role of conversational hand gestures in a narrative task , 2007 .

[93]  Sotaro Kita,et al.  Movement Phase in Signs and Co-Speech Gestures, and Their Transcriptions by Human Coders , 1997, Gesture Workshop.

[94]  D. Kieras Beyond pictures and words: Alternative information-processing models for imagery effect in verbal memory , 1978 .

[95]  Stefan Kopp,et al.  GNetIc - Using Bayesian Decision Networks for Iconic Gesture Generation , 2009, IVA.

[96]  Frank Wittig Maschinelles Lernen Bayes'scher Netze für benutzeradaptive Systeme , 2003, DISKI.

[97]  S. Kelly,et al.  Neural correlates of bimodal speech and gesture comprehension , 2004, Brain and Language.

[98]  Justine Cassell,et al.  Knowledge Representation for Generating Locating Gestures in Route Directions , 2009, Spatial Language and Dialogue.

[99]  Stanley Feldstein,et al.  On Gesture: Its Complementary Relationship With Speech , 2014 .

[100]  Hung-Hsuan Huang,et al.  From observation to simulation: generating culture-specific behavior for interactive systems , 2009, AI & SOCIETY.

[101]  Stefanie Dipper,et al.  Annotation of Information Structure: an Evaluation across different Types of Texts , 2008, LREC.

[102]  Judith Holler,et al.  The interaction of iconic gesture and speech , 2004 .

[103]  Timo Sowa Understanding coverbal iconic gestures in shape descriptions , 2006 .

[104]  J. D. Ruiter The production of gesture and speech , 2000 .

[105]  N. Goodman,et al.  Languages of art : an approach to a theory of symbols , 1979 .

[106]  Timothy Marsh,et al.  Shape your imagination: iconic gestural-based interaction , 1998, Proceedings. IEEE 1998 Virtual Reality Annual International Symposium (Cat. No.98CB36180).

[107]  Maurizio Mancini,et al.  Generating distinctive behavior for Embodied Conversational Agents , 2009, Journal on Multimodal User Interfaces.

[108]  W. Levelt,et al.  Speaking: From Intention to Articulation , 1990 .

[109]  J. Cassell,et al.  Embodied conversational agents , 2000 .

[110]  Stefan Kopp,et al.  Social resonance and embodied coordination in face-to-face conversation with artificial interlocutors , 2010, Speech Commun..

[111]  Robert Dale,et al.  Referring Expression Generation through Attribute-Based Heuristics , 2009, ENLG.

[112]  Jana Bressem,et al.  Notating gestures - Proposal for a form based notation system of coverbal gestures , 1998 .

[113]  Alexander Mehler,et al.  The Ariadne System: A Flexible and Extensible Framework for the Modeling and Storage of Experimental Data in the Humanities , 2010, LREC.

[114]  A. Burstein,et al.  Ideational gestures and speech in brain-damaged subjects , 1998 .

[115]  Stefan Kopp,et al.  Verbal or Visual? How Information is Distributed across Speech and Gesture in Spatial Dialog , 2006 .

[116]  Sotaro Kita,et al.  How does linguistic framing influence co-speech gestures? Insights from crosslinguistic differences and similarities , 2007 .

[117]  Nicholas O. Jungheim GESTURE AS A COMMUNICATION STRATEGY IN SECOND LANGUAGE DISCOURSE: A STUDY OF LEARNERS OF FRENCH AND SWEDISH.Marianne Gullberg. Lund, Sweden: Lund University Press, 1998. Pp. 253. , 2000, Studies in Second Language Acquisition.

[118]  Stefan Kopp,et al.  Multimodal Communication from Multimodal Thinking - towards an Integrated Model of Speech and Gesture Production , 2008, Int. J. Semantic Comput..

[119]  Hans-Peter Seidel,et al.  Annotated New Text Engine Animation Animation Lexicon Animation Gesture Profiles MR : . . . JL : . . . Gesture Generation Video Annotated Gesture Script , 2007 .

[120]  Sotaro Kita,et al.  Relations between syntactic encoding and co-speech gestures: Implications for a model of speech and gesture production , 2007 .

[121]  Irene Kimbara Gesture Form Convergence in Joint Description , 2008 .

[122]  Hao Yan,et al.  Coordination and context-dependence in the generation of embodied conversation , 2000, INLG.

[123]  David Jensen,et al.  Learning the structure of bayesian networks with constraint satisfaction , 2010 .

[124]  J. Breese,et al.  Emotion and personality in a conversational agent , 2001 .

[125]  K. Gwet Handbook of Inter-Rater Reliability: The Definitive Guide to Measuring the Extent of Agreement Among Raters , 2014 .

[126]  F. D. Saussure Cours de linguistique générale , 1924 .

[127]  Evelyn McClave,et al.  Gestural beats: The rhythm hypothesis , 1994 .

[128]  S. Levinson Frames of reference and Molyneux's question: Cross-linguistic evidence , 1996 .

[129]  Janet Beavin Bavelas,et al.  An experimental study of when and how speakers use gestures to communicate , 2002 .

[130]  J. Cassell,et al.  More Than Just Another Pretty Face: Embodied Conversational Interface Agents , 1999 .

[131]  Mark Steedman,et al.  APML, a Markup Language for Believable Behavior Generation , 2004, Life-like characters.

[132]  Anders L. Madsen,et al.  Hugin - The Tool for Bayesian Networks and Influence Diagrams , 2002, Probabilistic Graphical Models.

[133]  Sotaro Kita,et al.  Co-speech gestures do not originate from speech production processes: Evidence from the relationship between co-thought and co-speech gestures , 2009 .

[134]  R. Krauss,et al.  PSYCHOLOGICAL SCIENCE Research Article GESTURE, SPEECH, AND LEXICAL ACCESS: The Role of Lexical Movements in Speech Production , 2022 .

[135]  K. Tuite The production of gesture , 1993 .

[136]  Rieks op den Akker,et al.  Natural interaction with a virtual guide in a virtual environment , 2010, Journal on Multimodal User Interfaces.

[137]  W. Stokoe,et al.  Semiotics and Human Sign Languages , 1972 .

[138]  GEOFFREY BEATTIE,et al.  Do iconic hand gestures really contribute anything to the semantic information conveyed by speech? An experimental investigation , 1999 .

[139]  G. Schwarz Estimating the Dimension of a Model , 1978 .

[140]  I. Biederman Recognition-by-components: a theory of human image understanding. , 1987, Psychological review.

[141]  Murdock,et al.  The serial position effect of free recall , 1962 .

[142]  S. Goldin-Meadow,et al.  Why people gesture when they speak , 1998, Nature.

[143]  Stefan Kopp,et al.  Increasing the expressiveness of virtual agents: autonomous generation of speech and gesture for spatial description tasks , 2009, AAMAS.

[144]  J. Bortz Statistik für Human- und Sozialwissenschaftler , 2010 .

[145]  C. Nass,et al.  Truth is beauty: researching embodied conversational agents , 2001 .

[146]  C. Pelachaud,et al.  GRETA. A BELIEVABLE EMBODIED CONVERSATIONAL AGENT , 2005 .

[147]  D. Kimura,et al.  Manual activity during speaking. II. Left-handers. , 1973, Neuropsychologia.

[148]  Matthew Stone,et al.  Microplanning with Communicative Intentions: The SPUD System , 2001, Comput. Intell..

[149]  Heather Shovelton,et al.  Mapping the Range of Information Contained in the Iconic Hand Gestures that Accompany Spontaneous Speech , 1999 .

[150]  A. Kendon Gesture: Visible Action as Utterance , 2004 .

[151]  Martha W. Alibali,et al.  Raise your hand if you’re spatial: Relations between verbal and spatial skills and gesture production , 2007 .

[152]  Stefan Kopp,et al.  Trading Spaces: How Humans and Humanoids Use Speech and Gesture to Give Directions , 2007 .

[153]  Jean Carletta,et al.  Assessing Agreement on Classification Tasks: The Kappa Statistic , 1996, CL.

[154]  Stefan Kopp,et al.  Media Equation Revisited: Do Users Show Polite Reactions towards an Embodied Agent? , 2009, IVA.

[155]  Stefan Kopp,et al.  MURML: A Multimodal Utterance Representation Markup Language for Conversational Agents , 2002 .

[156]  Constantin F. Aliferis,et al.  The max-min hill-climbing Bayesian network structure learning algorithm , 2006, Machine Learning.

[157]  Nicole C. Krämer,et al.  Effects of Embodied Interface Agents and Their Gestural Activity , 2003, IVA.

[158]  D. McNeill So you think gestures are nonverbal , 1985 .

[159]  andy. luecking,et al.  Assessing Reliability on Annotations (1): Theoretical Considerations , 2005 .

[160]  Stefan Kopp,et al.  Synthese und Koordination von Sprache und Gestik für virtuelle multimodale Agenten , 2003, DISKI.

[161]  Judith Holler,et al.  Communicating common ground: How mutually shared knowledge influences speech and gesture in a narrative task , 2009, Speech Accompanying-Gesture.

[162]  J. Ross Quinlan,et al.  Induction of Decision Trees , 1986, Machine Learning.

[163]  J. Cassell,et al.  SOCIAL DIALOGUE WITH EMBODIED CONVERSATIONAL AGENTS , 2005 .

[164]  D. Marr,et al.  Representation and recognition of the spatial organization of three-dimensional shapes , 1978, Proceedings of the Royal Society of London. Series B. Biological Sciences.

[165]  Adam Kendon Chapter 9 – Some Relationships Between Body Motion and Speech: An Analysis of an Example1 , 1972 .

[166]  Janice I. Glasgow,et al.  THE IMAGERY DEBATE REVISITED: A COMPUTATIONAL PERSPECTIVE , 1993 .

[167]  S. Goldin-Meadow,et al.  Pointing Toward Two-Word Speech in Young Children , 2003 .

[168]  Stefan Kopp,et al.  Synthesizing multimodal utterances for conversational agents , 2004, Comput. Animat. Virtual Worlds.

[169]  Yang Gao Automatic extraction of spatial location for gesture generation , 2002 .

[170]  Zsófia Ruttkay,et al.  Presenting in Style by Virtual Humans , 2007, COST 2102 Workshop.

[171]  G. Beattie,et al.  What properties of talk are associated with the generation of spontaneous iconic hand gestures? , 2002, The British journal of social psychology.

[172]  Richard Scheines,et al.  Causation, Prediction, and Search, Second Edition , 2000, Adaptive computation and machine learning.

[173]  Stuart J. Russell,et al.  Dynamic bayesian networks: representation, inference and learning , 2002 .

[174]  P. Ekman,et al.  The Repertoire of Nonverbal Behavior: Categories, Origins, Usage, and Coding , 1969 .

[175]  J. Holler,et al.  The Effect of Common Ground on How Speakers Use Gesture and Speech to Represent Size Information , 2007 .

[176]  Ipke Wachsmuth,et al.  A Computational Model for the Representation and Processing of Shape in Coverbal Iconic Gestures 1 , 2009, Spatial Language and Dialogue.

[177]  D. McNeill,et al.  Speech-gesture mismatches: Evidence for one underlying representation of linguistic and nonlinguistic information , 1998 .

[178]  H. H. Clark,et al.  What's new? Acquiring New information as a process in comprehension , 1974 .

[179]  A. Jameson Adaptive interfaces and agents , 2002 .

[180]  Irene Kimbara On gestural mimicry , 2006 .

[181]  Jon Oberlander,et al.  Corpus-based generation of head and eyebrow motion for an embodied conversational agent , 2007, Lang. Resour. Evaluation.

[182]  Akiba A. Cohen,et al.  The Communicative Functions of Hand I1lustrators , 1977 .

[183]  Autumn B. Hostetter,et al.  Visible embodiment: Gestures as simulated action , 2008, Psychonomic bulletin & review.