An Experimental Study of Input Modes for Multimodal Human-Computer Interaction
[1] Takeshi Ohashi, et al. Multimodal interface with speech and motion of stick: CoSMoS, 1995.
[2] Alexander H. Waibel, et al. Model-based and empirical evaluation of multimodal interactive error correction, 1999, CHI '99.
[3] Alexander G. Hauptmann, et al. Speech and gestures for graphic image manipulation, 1989, CHI '89.
[4] Koichi Sasaki, et al. Multimodal personal information provider using natural language and emotion understanding from speech and keyboard input, 1996.
[5] Robert I. Damper, et al. Speech versus keying in command and control applications, 1995, Int. J. Hum. Comput. Stud..
[6] Adam Cheyer, et al. Multimodal maps: An agent-based approach, 1995, Multimodal Human-Computer Communication.
[7] Antonella De Angeli, et al. Integration and synchronization of input modes during multimodal human-computer interaction, 1997, CHI.
[8] Mathilde M. Bekker, et al. A comparison of mouse and speech input control of a text-annotation system, 1990, Behav. Inf. Technol..
[9] Steve Whittaker, et al. A preliminary analysis of the products of HCI research, using Pro Forma abstracts, 1994, CHI Conference Companion.