Generating Embodied Information Presentations

The output modalities available for information presentation by embodied, human-like agents include both language and various nonverbal cues such as pointing and gesturing. These nonverbal modalities can be used to emphasize, extend, or even replace the language output produced by the agent. Because language and nonverbal signals are interdependent, their production processes should be integrated. In this chapter, we discuss the issues involved in extending a natural language generation system with the generation of nonverbal signals. We sketch a general architecture for embodied language generation, discussing the interaction between nonverbal signal production and language generation, and the factors influencing the choice among the available modalities. As an example, we describe the generation of route descriptions by an embodied agent in a 3D environment.
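The idea of choosing among modalities can be illustrated with a toy planner. The sketch below is purely illustrative and not from the chapter: all names (`Instruction`, `Presentation`, `plan_presentation`) are hypothetical, and the single rule shown (point at a landmark when it is visible, otherwise describe it verbally) stands in for the richer modality-choice factors an actual embodied generation architecture would weigh.

```python
# Hypothetical sketch of modality choice for one route-description step.
# Assumption: a pointing gesture can replace a verbal spatial description
# only when the referent is visible in the 3D scene.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Instruction:
    text: str               # fully verbal form of the route step
    landmark: str           # referent mentioned by the step
    landmark_visible: bool  # can the agent point at it right now?

@dataclass
class Presentation:
    speech: str
    gestures: List[str] = field(default_factory=list)

def plan_presentation(step: Instruction) -> Presentation:
    """Pair language with a nonverbal signal when one is available."""
    if step.landmark_visible:
        # Gesture extends the speech: deixis ("that fountain") replaces
        # a verbal description of the landmark's location.
        return Presentation(
            speech=f"Go toward that {step.landmark}.",
            gestures=[f"point_at({step.landmark})"],
        )
    # No visible referent: language alone must carry the information.
    return Presentation(speech=step.text, gestures=[])

step = Instruction(
    text="Go toward the fountain at the end of the hall.",
    landmark="fountain",
    landmark_visible=True,
)
result = plan_presentation(step)
print(result.speech)
print(result.gestures)
```

The point of the sketch is the single integrated decision: speech content and gesture are planned together, so the verbal form changes when a gesture takes over part of the communicative load, rather than gestures being bolted on after text generation.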
