On Representing Salience and Reference in Multimodal Human-Computer Interaction
Jean-Claude Martin | Adam Cheyer | Luc Julia | Jerry R. Hobbs | Andrew Kehler
[1] Ellen F. Prince, et al. Toward a taxonomy of given-new information, 1981.
[2] Candace L. Sidner, et al. Attention, Intentions, and the Structure of Discourse, 1986, CL.
[3] Hiyan Alshawi, et al. Memory and context for language interpretation, 1987.
[4] Jeanette K. Gundel, et al. Cognitive Status and the Form of Referring Expressions in Discourse, 1993.
[5] Carla Huls, et al. Automatic Referent Resolution of Deictic and Anaphoric Expressions, 1995, CL.
[6] Adam Cheyer, et al. Multimodal Maps: An Agent-Based Approach, 1995, Multimodal Human-Computer Communication.
[7] Sharon L. Oviatt, et al. Multimodal interfaces for dynamic interactive maps, 1996, CHI.
[8] Sharon L. Oviatt, et al. Integration and synchronization of input modes during multimodal human-computer interaction, 1997, CHI.
[9] Adam Cheyer, et al. Speech: a privileged modality, 1997, EUROSPEECH.
[10] Jean-Claude Martin, et al. A Theoretical Framework for Multimodal User Studies, 1998.
[11] Jean-Claude Martin, et al. A Unified Framework for Constructing Multimodal Experiments and Applications, 1998, Cooperative Multimodal Communication.
[12] Adam Cheyer, et al. The Open Agent Architecture, 1997, Autonomous Agents and Multi-Agent Systems.