A flexible platform for developing context-aware 3D gesture-based interfaces
In this paper, we introduce a flexible framework that facilitates the definition of 3D gesture-based interfaces. Highlighting the need for context awareness in complex domains, such as the operating room, we describe how the proposed architecture can overcome integration challenges. Through a real-life scenario, an intra-operative medical image viewer, we demonstrate how the framework can be used in practice to define user interfaces in collaborative environments, where system behavior and response adapt to the current workflow stage and individual user requirements. Finally, we demonstrate how the defined interface can be manipulated through a high-level visual programming interface. The extensibility of the proposed architecture makes it applicable to a wide range of scenarios.
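The core idea of adapting gesture behavior to the current workflow stage can be sketched as a simple dispatch table keyed on (stage, gesture) pairs. This is a minimal illustrative sketch only; the class name, stage labels, and gesture names are assumptions for exposition and do not reflect the paper's actual API.

```python
# Hypothetical sketch: context-aware gesture dispatch.
# Stage labels ("navigation", "measurement") and gesture names are
# illustrative assumptions, not the framework's real identifiers.

class GestureInterface:
    def __init__(self):
        # Map (workflow stage, gesture) -> action, so the same gesture
        # can trigger different responses depending on context.
        self._bindings = {}

    def bind(self, stage, gesture, action):
        self._bindings[(stage, gesture)] = action

    def dispatch(self, stage, gesture):
        action = self._bindings.get((stage, gesture))
        return action() if action else None


ui = GestureInterface()
ui.bind("navigation", "swipe_left", lambda: "next_slice")
ui.bind("measurement", "swipe_left", lambda: "move_caliper")

# The same swipe gesture yields a stage-dependent response:
print(ui.dispatch("navigation", "swipe_left"))   # -> next_slice
print(ui.dispatch("measurement", "swipe_left"))  # -> move_caliper
```

In this sketch, per-user customization would amount to populating the binding table differently for each user, which mirrors the abstract's claim that both workflow stage and individual requirements shape the system response.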