A framework for the development of context-aware multimodal dialog systems

Context-aware dialog systems must be able to process very heterogeneous information sources and user input modes. This paper focuses on multimodal input, proposing a technique for the fusion of multiple input modalities in the dialog manager of the system, so that a single combined input is used to select the next system action. We describe a framework to build context-aware multimodal dialog systems that process the user’s spoken utterances, tactile and keyboard inputs, and information related to the context of the interaction. In our proposal, context information is divided into external and internal context; the internal context is represented by the detection of the user’s intention during the dialog and their emotional state. We describe a practical application of our technique to build a multimodal dialog system that provides context-aware academic information.
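
To make the fusion step concrete, the following is a minimal sketch of how per-modality hypotheses and context estimates might be merged into a single combined input that drives action selection. All class and function names (ModalityInput, Context, fuse, select_next_action) are illustrative assumptions, not the framework's actual API; the late-fusion strategy shown (selecting the highest-confidence hypothesis) is one simple possibility among many.

```python
# Sketch: fuse speech, tactile, and keyboard hypotheses with context
# (intention, emotional state) into one combined input for the dialog
# manager. Names and policy are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModalityInput:
    modality: str            # "speech", "tactile", or "keyboard"
    value: str               # recognized value (e.g., an ASR hypothesis)
    confidence: float        # recognizer confidence in [0, 1]

@dataclass
class Context:
    intention: str           # detected user intention (internal context)
    emotion: str             # detected emotional state (internal context)
    location: Optional[str] = None  # example of external context

@dataclass
class CombinedInput:
    value: str
    confidence: float
    context: Context

def fuse(inputs: List[ModalityInput], context: Context) -> CombinedInput:
    """Late fusion: keep the highest-confidence modality hypothesis
    and attach the current context, yielding a single combined input."""
    best = max(inputs, key=lambda i: i.confidence)
    return CombinedInput(best.value, best.confidence, context)

def select_next_action(combined: CombinedInput) -> str:
    """Toy dialog-manager policy: the combined input, not the raw
    per-modality signals, determines the next system action."""
    if combined.confidence < 0.5:
        return "ask_confirmation"
    if combined.context.emotion == "frustrated":
        return "offer_help"
    return f"answer:{combined.value}"

# Example: a spoken query competes with a tactile menu selection.
combined = fuse(
    [ModalityInput("speech", "exam_schedule", 0.82),
     ModalityInput("tactile", "course_list", 0.60)],
    Context(intention="get_academic_info", emotion="neutral"),
)
print(select_next_action(combined))  # -> answer:exam_schedule
```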