Context-Based Bounding Volume Morphing in Pointing Gesture Application

In recent years the number of intelligent systems has grown rapidly, and classical interaction devices such as mouse and keyboard are being replaced in some use cases. Novel, goal-based interaction systems, e.g. based on gesture and speech, allow natural control of various devices. However, these systems are prone to misinterpreting the user's intention. In this work we present a method for supporting goal-based interaction using multimodal interaction systems. By combining speech and gesture we are able to compensate for the uncertainties of both interaction methods, thus improving intention recognition. Using a prototypical system, we have demonstrated the usability of such a system in a qualitative evaluation.
