Learning Gestures for Customizable Human-Computer Interaction in the Operating Room

Interaction with computer-based medical devices in the operating room is often challenging for surgeons due to sterility requirements and the complexity of interventional procedures. Typical solutions, such as delegating the interaction task to an assistant, can be inefficient. We propose a method for gesture-based interaction in the operating room that surgeons can customize to their personal requirements and the interventional workflow. Given training examples for each desired gesture, our system learns low-dimensional manifold models that enable gesture recognition and the tracking of particular poses for fine-grained control. By capturing the surgeon's movements with a few wireless body-worn inertial sensors, we avoid common issues of camera-based systems, such as sensitivity to illumination changes and occlusion. Using a component-based framework implementation, our method can easily be connected to different medical devices. Our experiments show that the approach robustly recognizes learned gestures and distinguishes them from other movements.
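
The abstract gives no implementation details, so the following is only a minimal sketch of the general idea it describes: learning a separate low-dimensional embedding for each gesture's training frames (here via Laplacian Eigenmaps, using scikit-learn's SpectralEmbedding) and classifying new inertial-sensor frames by nearest-neighbor distance, with a rejection threshold for movements that match no learned gesture. All function names, data shapes, and threshold values are hypothetical, not the authors' implementation.

```python
# Minimal sketch: per-gesture manifold models from inertial-sensor training data,
# with nearest-neighbor recognition and rejection of unknown movements.
# Hypothetical names, shapes, and thresholds; not the authors' implementation.
import numpy as np
from sklearn.manifold import SpectralEmbedding  # Laplacian Eigenmaps

def learn_gesture_manifold(frames, n_components=2, n_neighbors=10):
    """Embed training frames (n_frames x n_features, e.g. concatenated
    accelerometer/gyroscope readings) into a low-dimensional manifold."""
    emb = SpectralEmbedding(n_components=n_components, n_neighbors=n_neighbors)
    coords = emb.fit_transform(frames)
    # Keep the original frames: SpectralEmbedding has no out-of-sample
    # mapping, so recognition below compares in the original sensor space.
    return frames, coords

def recognize(frame, models, reject_dist=0.5):
    """Return the label of the closest gesture model, or None if the frame
    is farther than reject_dist from every model (i.e. another movement)."""
    best_label, best_dist = None, np.inf
    for label, (train_frames, _) in models.items():
        d = np.min(np.linalg.norm(train_frames - frame, axis=1))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist < reject_dist else None

# Usage (hypothetical): models = {"zoom": learn_gesture_manifold(zoom_frames), ...}
#                       label = recognize(current_frame, models)
```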
