Extending a Dialog Model with Contextual Knowledge

Designing and exploring multimodal interaction techniques, such as those used in virtual environments, can be facilitated by high-level notations. Besides task modelling, notations have been introduced at the dialog level, such as our notation NiMMiT. For advanced interaction techniques, however, there is not yet an established approach for deciding when to stop detailing the task model and continue modelling at the dialog level. Moreover, context-awareness is usually introduced at the task level rather than at the dialog level. We show that this can cause an explosion in the number of dialog states when context-aware multimodal interaction is used within one and the same task. We therefore propose an approach that introduces contextual knowledge at the dialog level, where transitions are selected based on context information. We validate our approach in a case study, from which we conclude that the augmented notation is easy to use and successfully introduces context at the dialog level.
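The core idea of context-dependent transitions can be sketched as a small state machine in which each transition carries a context guard, so a single set of dialog states serves all contexts instead of being duplicated per context. This is an illustrative sketch only; the class names, the guard mechanism, and the noise-based example are assumptions for exposition and do not reproduce the actual NiMMiT semantics.

```python
# Illustrative sketch (not NiMMiT itself): a dialog model whose transitions
# are guarded by context predicates, so one state set covers all contexts
# instead of duplicating dialog states per context.

class Transition:
    def __init__(self, event, target, guard=lambda ctx: True):
        self.event = event    # event that may trigger the transition
        self.target = target  # name of the next dialog state
        self.guard = guard    # context predicate deciding applicability

class DialogModel:
    def __init__(self, initial):
        self.state = initial
        self.transitions = {}  # state name -> list of Transitions

    def add(self, state, transition):
        self.transitions.setdefault(state, []).append(transition)

    def fire(self, event, context):
        # Choose the first transition whose guard holds in the current context.
        for t in self.transitions.get(self.state, []):
            if t.event == event and t.guard(context):
                self.state = t.target
                return self.state
        return self.state  # no applicable transition: stay in current state

# Hypothetical usage: select an object by speech in a quiet environment,
# but fall back to pointing when the context reports high ambient noise.
model = DialogModel("idle")
model.add("idle", Transition("select", "speech_select",
                             guard=lambda ctx: ctx.get("noise") == "low"))
model.add("idle", Transition("select", "point_select",
                             guard=lambda ctx: ctx.get("noise") == "high"))

print(model.fire("select", {"noise": "high"}))  # -> point_select
```

Without context guards at the dialog level, the same behaviour would require a separate `idle` state (and its outgoing transitions) for each context, which is exactly the state explosion the abstract describes.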
