Context-Aware Multimodal Human-Computer Interaction

Crisis response and management involve the collaboration of many people. To perform and coordinate their activities, these people must rely on detailed and accurate information about the crisis, the environment, and many other factors. Ensuring collaboration among emergency services and high-quality care for victims requires the ability to supply dynamic, contextually correlated information. However, current approaches to constructing globally consistent views of a crisis suffer from three problems identified in [60]: (a) the setting of events changes constantly, (b) information is distributed across geographically distant locations, and (c) the complexity of the crisis management organization makes collaboration and verification of obtained information difficult and time-consuming, as illustrated by the sketch below.
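As an illustration only (not a design taken from [60] or from this paper; the names Observation and WorldModel are hypothetical), the following Python sketch shows one way such a globally consistent view could be maintained: timestamped, geolocated reports from distributed responders are merged into a single shared model in which newer reports supersede older ones for the same location.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    """A single report from a responder in the field."""
    reporter: str                  # who sent the report
    location: tuple[float, float]  # (latitude, longitude) of the event
    event: str                     # e.g. "fire", "blocked road"
    timestamp: float               # seconds since epoch

class WorldModel:
    """Merges distributed observations into one consistent view.

    Mirrors the three problems above: (a) newer reports supersede
    older ones for the same location, (b) reports may arrive from any
    source in any order, and (c) every organization queries one shared
    view instead of cross-checking with each other.
    """
    def __init__(self) -> None:
        self._latest: dict[tuple[float, float], Observation] = {}

    def report(self, obs: Observation) -> None:
        current = self._latest.get(obs.location)
        if current is None or obs.timestamp > current.timestamp:
            self._latest[obs.location] = obs

    def view(self) -> list[Observation]:
        """Snapshot of the current global picture, oldest first."""
        return sorted(self._latest.values(), key=lambda o: o.timestamp)
```

Under this assumption, two conflicting reports about the same location resolve to the more recent one, which is one simple way of coping with problem (a); a realistic system would additionally model trust, uncertainty, and conflicting observations.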

[1] Zhenke Yang et al., Developing Concept-Based User Interface using Icons for Reporting Observations, 2008.

[2] Mohammed Yeasin et al., Speech-gesture driven multimodal interfaces for crisis management, 2003, Proc. IEEE.

[3] Simon Keizer et al., Reasoning under Uncertainty in Natural Language Dialogue using Bayesian Networks, 2003.

[4] Léon J. M. Rothkrantz et al., Communication in Crisis Situations Using Icon Language, 2005, IEEE International Conference on Multimedia and Expo.

[5] Lee A. Becker et al., VIL: a visual inter lingua, 2001.

[6] Roger C. Schank et al., Scripts, Plans, Goals, and Understanding, 1988.

[7] Michael F. McTear, Spoken Dialogue Technology: Toward the Conversational User Interface, 2002.

[8] Colin Beardon et al., CD-Icon: an iconic language based on conceptual dependency, 1992, Intell. Tutoring Media.

[9] Paul A. Viola et al., Robust Real-Time Face Detection, 2001, International Journal of Computer Vision.

[10] Alan F. Blackwell et al., Dasher: a data entry interface using continuous gestures and language models, 2000, UIST '00.

[11] Samit Bhattacharya et al., Vernacular Education and Communication Tool for the People with Multiple Disabilities, 2002.

[12] Ute J. Dymon et al., An analysis of emergency map symbology, 2003.

[13] Paul A. Viola et al., Robust Real-time Object Detection, 2001.

[14] Wolfgang Wahlster et al., SmartKom: Foundations of Multimodal Dialogue Systems, 2006, SmartKom.

[15] Daphne Bavelier et al., Brain and Language: a Perspective from Sign Language, 1998, Neuron.

[16] Bob Carpenter et al., The Logic of Typed Feature Structures, 1992.

[17] Giuseppe Polese et al., Iconic language design for people with significant speech and multiple impairments, 1994, Assistive Technology and Artificial Intelligence.

[18] Katsuki Fujisawa et al., Transmitting visual information: icons become words, 2000, IEEE Conference on Information Visualization.

[19] Denis Anson et al., The Effects of Word Completion and Word Prediction on Typing Rates Using On-Screen Keyboards, 2006, Assistive Technology: The Official Journal of RESNA.

[20] Roberta Catizone et al., Multimodal Generation in the COMIC Dialogue System, 2005, ACL.

[21] Elisabeth André et al., Exploiting emotions to disambiguate dialogue acts, 2004, IUI '04.

[22] Jens Edlund et al., Adapt: a multimodal conversational dialogue system in an apartment domain, 2000, INTERSPEECH.

[23] Hong-Kwang Jeff Kuo et al., Dialogue management in the Bell Labs communicator system, 2000, INTERSPEECH.

[24] Jesper Kjeldskov et al., Interaction Design for Handheld Computers, 2002.

[25] Donald A. Norman, Things That Make Us Smart, 1993.

[26] Anton Nijholt et al., A Tractable DDN-POMDP Approach to Affective Dialogue Modeling for General Probabilistic Frame-based Dialogue Systems, 2007.

[27] Wolfgang Wahlster et al., Dialogue Systems Go Multimodal: The SmartKom Experience, 2006, SmartKom.

[28] Nalini Venkatasubramanian et al., CAMAS: a citizen awareness system for crisis mitigation, 2004, SIGMOD '04.

[29] Léon J. M. Rothkrantz et al., An Adaptive Keyboard with Personalized Language-Based Features, 2007, TSD.

[30] Mark Steedman et al., Combinatory Categorial Grammar, 2011.

[31] Takeo Kanade et al., An Iterative Image Registration Technique with an Application to Stereo Vision, 1981, IJCAI.

[32] Victor Zue et al., JUPITER: a telephone-based conversational interface for weather information, 2000, IEEE Trans. Speech Audio Process.

[33] Armando Fox et al., The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms, 2002, IEEE Pervasive Comput.

[34] Timothy F. Cootes et al., Active Appearance Models, 1998, ECCV.

[35] A. Singhal et al., Pencils and Photos as Tools of Communicative Research and Praxis, 2006.

[36] Louis Boves et al., Towards Ambient Intelligence: Multimodal Computers that Understand Our Intentions, 2003.

[37] Sharon L. Oviatt et al., When do we interact multimodally? Cognitive load and multimodal communication patterns, 2004, ICMI '04.

[38] Intelligent system for exploring dynamic crisis environments, 2006.

[39] Léon J. M. Rothkrantz et al., Constructing Knowledge of the World in Crisis Situations using Visual Language, 2006, IEEE International Conference on Systems, Man and Cybernetics.

[40] Pauline A. Smith, Towards a Practical Measure of Hypertext Usability, 1996, Interact. Comput.

[41] Léon J. M. Rothkrantz et al., Dialogue Control in the Alparon System, 2000, TSD.

[42] Joanna Lumsden et al., Handbook of Research on User Interface Design and Evaluation for Mobile Technology, 2008.

[43] Mark T. Maybury et al., Intelligent user interfaces: an introduction, 1998, IUI '99.

[44] Takeo Kanade et al., Comprehensive database for facial expression analysis, 2000, Fourth IEEE International Conference on Automatic Face and Gesture Recognition.

[45] Aravind K. Joshi et al., Compositional Semantics With Lexicalized Tree-Adjoining Grammar (LTAG): How Much Underspecification is Necessary?, 2001.

[46] Léon J. M. Rothkrantz et al., Comparison between different feature extraction techniques for audio-visual speech recognition, 2007, Journal on Multimodal User Interfaces.

[47] Shumin Zhai et al., Shorthand writing on stylus keyboard, 2003, CHI '03.

[48] I. Scott MacKenzie et al., Text entry using soft keyboards, 1999, Behav. Inf. Technol.

[49] P. Ekman, Unmasking the Face, 1975.

[50] Justine Cassell et al., Embodied conversational interface agents, 2000, CACM.

[51] L. J. M. Rothkrantz, An Icon-Based Communication Tool on a PDA, 2004.

[52] L. I. Perlovsky, Emotions, learning and control, 1999, IEEE International Symposium on Intelligent Control / Intelligent Systems and Semiotics.

[53] C. Nieuwenhuis, The use of Active Appearance Model for facial expression recognition in crisis environments, 2007.

[54] Gregory D. Abowd et al., Cirrin: a word-level unistroke keyboard for pen input, 1998, UIST '98.

[55] Yukiko I. Nakano et al., MACK: Media lab Autonomous Conversational Kiosk, 2002.

[56] Srini Ramaswamy et al., WHISPER: Service Integrated Incident Management System, 2006.

[57] Léon J. M. Rothkrantz et al., Classification of Public Transport Information Dialogues Using an Information-Based Coding Scheme, 1996, ECAI Workshop on Dialogue Processing in Spoken Language Systems.

[58] Stan Davis et al., Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences, 1980.

[59] Karen Schuchardt et al., Developing Concept-Based User Interfaces for Scientific Computing, 2006, Computer.

[60] Léon J. M. Rothkrantz et al., Dynamic Scripting in Crisis Environments, 2007, HCI.

[61] Yorick Wilks et al., Multimodal Dialogue Management in the COMIC Project, 2003.