Modeling ontology for multimodal interaction in ubiquitous computing systems

People communicate with each other in different ways, such as words and gestures, to convey information about their status, emotions, and intentions. But how can this information be described so that autonomous systems (e.g., robots) can interact with a human being in a given environment? A multimodal interface allows a more flexible and natural interaction between a user and a computing system. This paper presents a methodological approach for designing an architecture that facilitates the work of a fusion engine. The selection of modalities and the fusion of events invoked by the fusion engine are based on an ontology that describes the environment in which the multimodal interaction system operates.
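To make the idea concrete, the sketch below shows how an ontology describing an environment and its available modalities could drive modality selection. It is a minimal illustration with an assumed vocabulary (the `mmi:` namespace, the `hasModality`, `noiseSensitive`, and `ambientNoise` properties, and the `select_modalities` helper are all hypothetical), not the ontology or fusion engine actually proposed in the paper.

```python
# Minimal sketch: an RDF graph describing an environment and its modalities,
# plus a trivial rule a fusion engine might apply when choosing modalities.
# All names in the mmi: namespace are assumptions for illustration only.
from rdflib import Graph, Namespace, RDF, Literal

MMI = Namespace("http://example.org/mmi#")  # hypothetical namespace

g = Graph()
g.bind("mmi", MMI)

# An environment individual and the modalities it offers
g.add((MMI.LivingRoom, RDF.type, MMI.Environment))
for modality, noise_sensitive in [("Speech", True), ("Gesture", False)]:
    m = MMI[modality]
    g.add((m, RDF.type, MMI.Modality))
    g.add((m, MMI.noiseSensitive, Literal(noise_sensitive)))
    g.add((MMI.LivingRoom, MMI.hasModality, m))

# Current context reported by the environment (e.g., high ambient noise)
g.add((MMI.LivingRoom, MMI.ambientNoise, Literal("high")))

def select_modalities(graph, env):
    """Return modalities usable in `env`, skipping noise-sensitive ones
    when the ontology reports high ambient noise."""
    noisy = graph.value(env, MMI.ambientNoise) == Literal("high")
    selected = []
    for m in graph.objects(env, MMI.hasModality):
        if noisy and graph.value(m, MMI.noiseSensitive) == Literal(True):
            continue
        selected.append(m)
    return selected

print(select_modalities(g, MMI.LivingRoom))  # keeps Gesture, drops Speech
```

In this toy version, the fusion engine consults the ontology both for which modalities exist in the environment and for contextual facts (here, ambient noise) before deciding which event streams to fuse; the paper's architecture is organized around the same separation between the environment description and the fusion logic.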
