A multimodal labeling interface for wearable computing
[1] Yunde Jia, et al. A miniature stereo vision machine (MSVM-III) for dense disparity mapping, 2004, ICPR 2004.
[2] Luc Van Gool, et al. SURF: Speeded Up Robust Features, 2006, ECCV.
[3] James A. Landay, et al. VoiceLabel: using speech to label mobile sensor data, 2008, ICMI '08.
[4] Ali H. Sayed, et al. SNAP&TELL: a multi-modal wearable computer interface for browsing the environment, 2002, Proceedings of the Sixth International Symposium on Wearable Computers.
[5] Yang Liu, et al. Hand-Gesture Based Text Input for Wearable Computers, 2006, Fourth IEEE International Conference on Computer Vision Systems (ICVS'06).
[6] Gunther Heidemann, et al. Multimodal interaction in an augmented reality scenario, 2004, ICMI '04.
[7] Matti Pietikäinen, et al. Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns, 2002, IEEE Trans. Pattern Anal. Mach. Intell.
[8] Tobias Höllerer, et al. Multimodal interaction with a wearable augmented reality system, 2006, IEEE Computer Graphics and Applications.
[9] Alexander H. Waibel, et al. Smart Sight: a tourist assistant system, 1999, Digest of Papers, Third International Symposium on Wearable Computers.
[10] Helge J. Ritter, et al. Interactive image data labeling using self-organizing maps in an augmented reality scenario, 2005, Neural Networks.