Learning Gestures for Customizable Human-Computer Interaction in the Operating Room
[1] Kenton O'Hara, et al. Exploring the potential for touchless interaction in image-guided interventional radiology, 2011, CHI.
[2] Michael Isard, et al. CONDENSATION—Conditional Density Propagation for Visual Tracking, 1998, International Journal of Computer Vision.
[3] Mikhail Belkin, et al. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation, 2003, Neural Computation.
[4] C. Baur, et al. A non-contact mouse for surgeon-computer interaction, 2004, Technology and Health Care: official journal of the European Society for Engineering and Medicine.
[5] Ahmed M. Elgammal, et al. The Role of Manifold Learning in Human Motion Analysis, 2006, Human Motion.
[6] Nassir Navab, et al. Multiple-Activity Human Body Tracking in Unconstrained Environments, 2010, AMDO.
[7] Zhen Wang, et al. uWave: Accelerometer-based Personalized Gesture Recognition and Its Applications, 2009, PerCom.
[8] Joachim Hornegger, et al. 3-D gesture-based scene navigation in medical imaging applications using Time-of-Flight cameras, 2008, IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops.
[9] Ulrich G. Hofmann, et al. Touch- and marker-free interaction with medical software, 2009.
[10] Norbert Link, et al. Gesture recognition with inertial sensors and optimized DTW prototypes, 2010, IEEE International Conference on Systems, Man and Cybernetics.
[11] Jani Mäntyjärvi, et al. Accelerometer-based gesture control for a design environment, 2006, Personal and Ubiquitous Computing.
[12] Yael Edan, et al. A Real-Time Hand Gesture Interface for Medical Visualization Applications, 2006.
[13] Luc Van Gool, et al. Learning Generative Models for Multi-Activity Body Pose Estimation, 2008, International Journal of Computer Vision.