This paper discusses design principles for spoken language applications. We focus on those aspects of interface design that are separate from basic speech recognition and that concern the overall process of performing a task through a speech interface. Six basic speech user interface design principles are discussed: 1. User plasticity. This property describes how readily users can adapt to speech interfaces. 2. Interaction protocol styles. We explain how different interaction protocols for speech interfaces affect basic task throughput. 3. Error correction. Alternative ways to correct recognition errors are examined. 4. Response time. The response time requirements of a speech user interface are presented, based on experimental results. 5. Task structure. The use of task structure to reduce the complexity of the speech recognition problem is discussed and the resulting benefits are demonstrated (a sketch of this idea follows the abstract). 6. Multi-modality. The opportunity to integrate several modalities into the interface is evaluated. Since these design principles differ from those for standard applications driven by typing or pointing, we present experimental support for their importance as well as perspectives on solutions and further research. The research described in this paper was sponsored by the Defense Advanced Research Projects Agency. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.
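To make principle 5 concrete, the following is a minimal Python sketch of how task structure can constrain a recognizer's active vocabulary. The state names, vocabularies, and DummyRecognizer interface are hypothetical illustrations, not the system evaluated in the paper.

```python
# Illustrative sketch only (the state names, vocabularies, and DummyRecognizer
# are hypothetical, not the system described in the paper). The idea behind
# principle 5: constrain the recognizer's active vocabulary by the current
# step of the task, so the recognition search space stays small.

ACTIVE_VOCABULARY = {
    "choose_command": {"open", "close", "delete", "help"},
    "choose_file": {"report", "budget", "memo", "cancel"},
    "confirm": {"yes", "no"},
}


class DummyRecognizer:
    """Stand-in for a real engine; accepts only words the task state allows."""

    def decode(self, audio_tokens, allowed_words):
        # Keep only hypotheses that are legal in the current task state.
        return [w for w in audio_tokens if w in allowed_words]


def recognize(audio_tokens, state, recognizer):
    """Decode input while restricting hypotheses to the current state's vocabulary."""
    return recognizer.decode(audio_tokens, ACTIVE_VOCABULARY[state])


if __name__ == "__main__":
    # "delete" is out of vocabulary in the "confirm" state, so it is rejected.
    print(recognize(["yes", "delete"], "confirm", DummyRecognizer()))  # ['yes']
```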