Incremental Generation of Multimodal Deixis Referring to Objects

This paper describes an approach to generating multimodal deictic expressions to be uttered by an anthropomorphic agent in virtual reality. The proposed algorithm integrates pointing gestures with definite descriptions: the context-dependent discriminatory power of the gesture determines content selection for the verbal constituent. The concept of a pointing cone is used to model the region singled out by a pointing gesture and to distinguish two referential functions, object-pointing and region-pointing.
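To make the interplay between the pointing cone and verbal content selection concrete, the following is a minimal sketch, not the paper's implementation. The names `Obj`, `in_pointing_cone`, and `refer`, the attribute ordering, and the cone geometry are illustrative assumptions; the attribute-selection loop is a generic incremental strategy in the style commonly used for generating definite descriptions.

```python
import math
from dataclasses import dataclass


@dataclass
class Obj:
    name: str
    position: tuple    # (x, y, z) in world coordinates
    properties: dict   # e.g. {"type": "bolt", "colour": "red"}


def in_pointing_cone(obj, apex, direction, half_angle):
    """Return True if obj lies inside the pointing cone with the given
    apex, unit axis direction, and half-angle (in radians)."""
    v = tuple(p - a for p, a in zip(obj.position, apex))
    norm = math.sqrt(sum(c * c for c in v))
    if norm == 0.0:
        return True  # object at the apex trivially counts as inside
    cos_angle = sum(c * d for c, d in zip(v, direction)) / norm
    return cos_angle >= math.cos(half_angle)


def refer(target, scene, apex, direction, half_angle,
          preferred_attrs=("type", "colour", "size")):
    """Select content for a multimodal referring act.

    The pointing cone restricts the distractor set. If the target is
    the only object inside the cone, the gesture alone discriminates
    (object-pointing) and the verbal part stays minimal. Otherwise
    (region-pointing), attributes are added incrementally until the
    remaining in-cone distractors are ruled out.
    """
    in_cone = [o for o in scene
               if in_pointing_cone(o, apex, direction, half_angle)]
    distractors = [o for o in in_cone if o is not target]
    if not distractors:
        return "object-pointing", ["this"]
    words = ["this"]
    for attr in preferred_attrs:
        value = target.properties.get(attr)
        if value is None:
            continue
        kept = [d for d in distractors
                if d.properties.get(attr) == value]
        if len(kept) < len(distractors):  # attribute has discriminatory power
            words.append(value)
            distractors = kept
        if not distractors:
            break
    return "region-pointing", words
```

In this sketch the cone acts as a filter on the distractor set: the wider the cone or the more cluttered the indicated region, the lower the gesture's discriminatory power, and the more attributes the definite description must contribute.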
