Adapting multimodal dialog for the elderly

In this paper, we outline the design of a multimodal interface for a mobile pedestrian navigation system developed within the COLLATE project. The interface aims to adapt to the user's varying resource limitations, taking into account both cognitive load and age. We present an approach in which special acoustic models for elderly speakers improve speech recognition quality and at the same time serve as an information source for user modeling. Three presentation strategies are supported: unimodal (speech only or graphics only), redundant (speech and graphics conveying the same information), and concurrent (speech and graphics with minimal overlap).
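To make the strategy selection concrete, the following is a minimal sketch of how a user model might map resource limitations to one of the three presentation strategies. All names, thresholds, and inputs (`cognitive_load`, `elderly`, `walking`) are illustrative assumptions, not taken from the paper's actual implementation.

```python
from enum import Enum

class Strategy(Enum):
    SPEECH_ONLY = "speech only"
    GRAPHICS_ONLY = "graphics only"
    REDUNDANT = "speech and graphics, same information"
    CONCURRENT = "speech and graphics, minimal overlap"

def choose_strategy(cognitive_load: float, elderly: bool, walking: bool) -> Strategy:
    """Pick a presentation strategy from estimated user resources.

    cognitive_load is a normalized estimate in [0, 1]; the 0.7
    threshold below is a hypothetical cut-off for illustration.
    """
    if cognitive_load > 0.7:
        # Overloaded users get a single channel; while walking, the
        # visual channel is occupied by the environment, so use speech.
        return Strategy.SPEECH_ONLY if walking else Strategy.GRAPHICS_ONLY
    if elderly:
        # Redundant output can compensate for reduced hearing or vision.
        return Strategy.REDUNDANT
    # Low load: split complementary information across both channels.
    return Strategy.CONCURRENT
```

A dialog manager could re-evaluate this choice at each turn as the cognitive-load estimate changes, keeping the output modality matched to the user's current situation.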
