COMBINING SPEECH AND EARCONS TO ASSIST MENU NAVIGATION

Previous research on non-speech audio interfaces has demonstrated that they can enhance performance on menu navigation tasks. Most of this work has focused on tasks in which the menu is not spoken and a visual representation of the menu is accessible throughout the task. In this paper we explore the potential benefits that earcons, a type of structured sound, might bring to spoken menu systems for which no visual representation is available. An evaluation of two spoken menu systems that differ only in whether they also employ earcons indicates that the use of earcons improves task performance by reducing the number of keystrokes required, although it also increases the time spent on each task.