Adaptive Human-Computer Dialogue

It is difficult for a developer to anticipate all the surface linguistic forms that users of a spoken dialogue application might need. In any specific case, users may need concepts that the developer did not pre-program. This chapter presents a method that lets end-users adapt the vocabulary of a spoken dialogue interface at run-time. The adaptation expands existing pre-programmed concept classes by adding new concepts to them. It is a supervised learning method in which users indicate both the concept class and the semantic representation of each new concept; to this end, the system provides users with a set of rules and ways in which the new language knowledge can be supplied to the computer. New linguistic knowledge is acquired at the surface and semantic levels through multiple modalities, including speaking, typing, pointing, touching, and image capture. The acquired knowledge is updated and stored in a semantic grammar and a semantic database.
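The core idea can be illustrated with a minimal sketch, assuming a very simplified representation: all class names, surface forms, and semantic values below are hypothetical, and the grammar is reduced to a lookup of surface forms within concept classes. A user teaches the system a new concept by supplying its concept class and semantic representation, after which the expanded grammar covers the new surface form.

```python
# Hypothetical sketch of run-time vocabulary adaptation: a semantic
# grammar maps surface forms to semantic values within pre-programmed
# concept classes, and an end-user can extend a class at run-time.

class SemanticGrammar:
    def __init__(self):
        # Pre-programmed concept classes with a few seed concepts
        # (illustrative names and codes, not from the chapter).
        self.classes = {
            "CITY": {"boston": "BOS", "denver": "DEN"},
            "AIRLINE": {"united": "UA"},
        }

    def add_concept(self, concept_class, surface_form, semantics):
        """Supervised adaptation: the user indicates the concept class
        and the semantic representation for the new surface form."""
        if concept_class not in self.classes:
            raise KeyError(f"unknown concept class: {concept_class}")
        self.classes[concept_class][surface_form.lower()] = semantics

    def parse(self, utterance):
        """Tag each known word with its (class, semantics) pair;
        unknown words are left unanalyzed."""
        result = []
        for word in utterance.lower().split():
            for cls, lexicon in self.classes.items():
                if word in lexicon:
                    result.append((word, cls, lexicon[word]))
                    break
            else:
                result.append((word, None, None))
        return result

grammar = SemanticGrammar()
print(grammar.parse("fly to tucson"))         # "tucson" is still unknown
grammar.add_concept("CITY", "Tucson", "TUS")  # user teaches the new concept
print(grammar.parse("fly to tucson"))         # "tucson" now tagged as a CITY
```

In a full system the user would supply the surface form by speaking or typing it and indicate its class and semantics through one of the modalities listed above; the sketch only shows how the grammar update itself changes subsequent analyses.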