The Automated Interplay of Multimodal Fission and Fusion in Adaptive HCI

Current context-aware systems gather large amounts of information to maximize their functionality, yet they predominantly communicate with the user in rather static ways. This paper motivates two components that act as mediators between otherwise arbitrary components for multimodal fission and fusion, with the goal of improving the system's communicative capabilities. Using an exemplary selection scenario, we describe an architecture for the automatic cooperation of fusion and fission in a model-driven realization. We show how the approach supports user-initiated dialog requests as well as user-nominated UI configuration. In addition, we show how multimodal input conflicts can be resolved via a shortcut in the commonly used human-computer interaction loop (HCI loop).
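The conflict-resolution shortcut can be illustrated with a minimal sketch: when two modal inputs disagree, the fusion component hands a clarification request directly to the fission component instead of routing it through the dialogue manager. This is not the authors' implementation; all class and method names below are hypothetical.

```python
# Minimal sketch (assumed names, not the paper's implementation) of a fusion
# component that short-circuits the HCI loop on conflicting modal inputs.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ModalInput:
    modality: str          # e.g. "speech" or "gesture"
    referent: str          # the object the user appears to refer to
    confidence: float      # recognizer confidence in [0, 1]


class Fission:
    """Renders system output; a real component would pick modalities from a context model."""

    def render(self, message: str) -> None:
        print(f"[fission] {message}")


class Fusion:
    """Combines modal inputs and resolves conflicts via the fission shortcut."""

    def __init__(self, fission: Fission):
        self.fission = fission

    def combine(self, a: ModalInput, b: ModalInput) -> Optional[str]:
        if a.referent == b.referent:
            return a.referent  # consistent interpretation, pass on as usual
        # Conflict: ask the user directly, bypassing the dialogue manager.
        self.fission.render(
            f"Did you mean '{a.referent}' ({a.modality}) "
            f"or '{b.referent}' ({b.modality})?"
        )
        return None


if __name__ == "__main__":
    fusion = Fusion(Fission())
    speech = ModalInput("speech", "lamp", 0.8)
    gesture = ModalInput("gesture", "heater", 0.7)
    fusion.combine(speech, gesture)  # conflicting referents trigger a clarification
```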
