Skipping spare information in multimodal inputs during multimodal input fusion
Fang Chen | Yong Sun | Vera Chung | Yu Shi