A Multimodal Human-Computer Interface for the Control of a Virtual Environment

To further advances in Human-Computer Intelligent Interaction (HCII), we employ an approach that integrates two modes of human-computer communication to control a virtual environment. Using auditory and visual modes in the form of speech and gesture recognition, we outline the control of a task-specific virtual environment without the need for traditional large-scale virtual reality (VR) interfaces such as a wand, mouse, or keyboard. By drawing on features from both speech and gesture, a unique interface is created in which the different modalities complement each other in a more "human" communication style.
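The abstract describes the fusion of the two modalities only at an architectural level. As a purely illustrative sketch, and not the paper's actual method, the following Python fragment shows one common way such fusion is realized: the speech channel supplies the action ("what") and the gesture channel supplies the spatial argument ("where"), with the two combined only when they co-occur within a short time window. All names here (`SpeechEvent`, `GestureEvent`, `fuse`, the one-second window) are hypothetical assumptions, not taken from the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechEvent:
    """A recognized spoken command, e.g. 'move' or 'select'. (Hypothetical.)"""
    command: str
    timestamp: float  # seconds

@dataclass
class GestureEvent:
    """A recognized hand gesture with a 3-D pointing direction. (Hypothetical.)"""
    label: str                        # e.g. 'point', 'grab'
    direction: tuple[float, float, float]  # unit vector in world coordinates
    timestamp: float                  # seconds

def fuse(speech: SpeechEvent, gesture: GestureEvent,
         window: float = 1.0) -> Optional[dict]:
    """Combine a spoken command with a co-occurring gesture.

    Events are fused into a single virtual-environment command only
    if they fall within `window` seconds of each other; otherwise the
    modalities are treated as unrelated.
    """
    if abs(speech.timestamp - gesture.timestamp) > window:
        return None  # too far apart in time to be one multimodal act
    return {"action": speech.command, "target_direction": gesture.direction}

# Example: saying "move" while pointing along +x yields one fused command.
cmd = fuse(SpeechEvent("move", 10.2),
           GestureEvent("point", (1.0, 0.0, 0.0), 10.5))
print(cmd)  # {'action': 'move', 'target_direction': (1.0, 0.0, 0.0)}
```

Time-windowed late fusion of this kind is only one design choice; systems of this era also explored frame-level (early) fusion of the speech and gesture feature streams.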
