Speech/Gesture Interface to a Visual-Computing Environment
Vladimir Pavlovic, Klaus Schulten, Thomas S. Huang, Rajeev Sharma, James C. Phillips, Michael Zeller, Yunxin Zhao, Stephen M. Chu, Zion Lo
[1] Philip R. Cohen, et al. QuickSet: multimodal interaction for distributed applications, 1997, MULTIMEDIA '97.
[2] Takeo Kanade, et al. DigitEyes: Vision-Based Human Hand Tracking, 1993.
[3] Alexander G. Hauptmann, et al. Gestures with Speech for Graphic Manipulation, 1993, Int. J. Man Mach. Stud.
[4] Takeo Kanade, et al. DigitEyes: vision-based hand tracking for human-computer interaction, 1994, Proceedings of the 1994 IEEE Workshop on Motion of Non-rigid and Articulated Objects.
[5] Laxmikant V. Kale, et al. MDScope - a visual computing environment for structural biology, 1995.
[6] Alex Pentland, et al. ALIVE: Artificial Life Interactive Video Environment, 1994, AAAI.
[7] Richard A. Bolt, et al. "Put-that-there": Voice and gesture at the graphics interface, 1980, SIGGRAPH '80.
[8] Jian Wang, et al. Integration of eye-gaze, voice and manual response in multimodal user interface, 1995, IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century.
[9] Chin-Hui Lee, et al. Automatic recognition of keywords in unconstrained speech using hidden Markov models, 1990, IEEE Trans. Acoust. Speech Signal Process.
[10] Vladimir Pavlovic, et al. Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review, 1997, IEEE Trans. Pattern Anal. Mach. Intell.