Development of Event-Driven Dialogue System for Social Mobile Robot

This paper is part of an ongoing project to develop a multimodal mobile social robot for office and home environments. An event-driven dialogue system architecture is proposed to integrate the components of a spoken dialogue system, together with agents for vision understanding, navigation, and radio-frequency identification (RFID), through a set of events and messages. Speech recognition is powered by our in-house multilingual, speaker-independent phonetic speech recognition engine. A hybrid template-based and rule-based language generation paradigm is proposed to render the interaction so that the dialogue can evolve according to the context and task domain. A three-level error-recovery strategy is employed to deal with different types of errors. The successful implementation of the proposed system on our experimental social mobile robot demonstrates the integration of spoken language technology with other modalities.
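The core of such an event-driven integration is a publish/subscribe hub through which the dialogue manager and the perception agents exchange messages without direct coupling. The sketch below is illustrative only; the event names, payloads, and agents are hypothetical and not taken from the actual system.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe hub: agents register handlers for
    named events and post messages without direct coupling."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        # Register a callback for one event type.
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the message to every subscribed handler.
        for handler in self._handlers[event_type]:
            handler(payload)

# Hypothetical agents: a dialogue manager reacting to events
# raised by the RFID reader and the vision module.
log = []
bus = EventBus()
bus.subscribe("rfid.tag_detected",
              lambda p: log.append(f"greet user {p['user']}"))
bus.subscribe("vision.face_found",
              lambda p: log.append("face seen, start dialogue"))

bus.publish("rfid.tag_detected", {"user": "alice"})
bus.publish("vision.face_found", {})
```

Because every agent interacts only with the bus, new modalities (e.g. a navigation agent) can be added by subscribing to existing events, without modifying the dialogue manager.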