Investigating modality selection strategies

This paper describes a user study on the influence of efficiency on modality selection (speech vs. virtual keyboard, and speech vs. physical keyboard) and on perceived mental effort. Efficiency was varied in terms of the number of interaction steps. Based on previous research, it was hypothesized that the number of necessary interaction steps determines the preference for a specific modality. Moreover, the relationship between perceived mental effort, modality selection, and efficiency was investigated. Results showed that modality selection strongly depends on the number of necessary interaction steps, whereas task duration and modality selection were uncorrelated. No relationship between mental effort and modality selection was observed either.
