Multimodal User Interface for Mission Planning

This paper presents a multimodal interface that fuses multiple input modalities for natural human-computer interaction. The architecture of the interface and the methods applied are described, and the results of real-time multimodal fusion are analyzed. Research in progress on a mission planning scenario is discussed, and possible future directions are outlined.
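As a rough illustration of what "real-time multimodal fusion" can mean in practice, the sketch below aligns a deictic speech command with the nearest pointing gesture inside a short time window. This is not the paper's architecture; the event structure, one-second window, and command format are assumptions made purely for illustration.

```python
"""Minimal sketch of time-window multimodal fusion (illustrative only).

The modality names, time window, and command format are assumed for this
example; the paper's actual fusion method is not reproduced here.
"""

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ModalityEvent:
    modality: str      # e.g. "speech" or "gesture"
    payload: str       # recognized phrase or pointed-at object id
    timestamp: float   # seconds since session start


def fuse(speech: ModalityEvent,
         gestures: List[ModalityEvent],
         window: float = 1.0) -> Optional[str]:
    """Resolve a deictic speech command ("move that") against the gesture
    whose timestamp is closest to the speech event, within `window` seconds."""
    candidates = [g for g in gestures
                  if abs(g.timestamp - speech.timestamp) <= window]
    if not candidates:
        return None
    nearest = min(candidates,
                  key=lambda g: abs(g.timestamp - speech.timestamp))
    return f"{speech.payload} -> {nearest.payload}"


if __name__ == "__main__":
    speech_cmd = ModalityEvent("speech", "move that", timestamp=12.4)
    pointing = [ModalityEvent("gesture", "unit_7", timestamp=12.1),
                ModalityEvent("gesture", "waypoint_3", timestamp=15.0)]
    print(fuse(speech_cmd, pointing))   # -> "move that -> unit_7"
```

In a hypothetical setup like this, the spoken command supplies the action while the gesture supplies the referent, so neither modality alone is sufficient and temporal proximity drives the binding.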
