Objective: The objective of this study was to examine the effects of input-mode switching on multi-modal interaction with smartphones. Background: Multi-modal interaction is considered an efficient alternative for the input and output of information in mobile environments. However, current mobile UI (user interface) systems have various limitations: they overlook transitions between modes and the usability of mode combinations. Method: A pre-survey identified five representative smartphone tasks, grouped by function. The first experiment used a single mode (uni-mode) for five single tasks; the second used combined modes (multi-mode) for three dual tasks. The dependent variables were user preference and task completion time. The independent variable in the first experiment was mode type (i.e., touch, pen, or voice), while in the second experiment it was task type (i.e., internet search, subway map, memo, gallery, and application store). Results: In the first experiment, there was no difference between pen and touch input; however, the preferred mode depended on the functional characteristics of the task. In the second experiment, user preference depended on the order and combination of modes. Even when mode transitions were required, users preferred multi-mode combinations that included voice. Conclusion: The order in which modes are combined may affect multi-modal usability. Therefore, when designing a multi-modal system, the frequent transitions between various mobile contents in different modes should be properly considered. Application: The findings may serve as user-centered design guidelines for mobile multi-modal UI systems.