A multi-expert model for dialogue and behavior control of conversational robots and agents

This paper presents an intelligence model for conversational service robots. The model employs modules called experts, each specialized in a certain kind of task, such as performing physical behaviors or engaging in dialogues. Some experts are responsible for understanding human utterances and deciding the robot's utterances or actions. The model enables tasks to be switched and canceled based on recognized human intentions, as well as the parallel execution of several tasks. It specifies the interface that an expert must provide, and any expert that conforms to this interface can be employed, which makes the model extensible.
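To make the expert interface concrete, the following Python code is a minimal sketch of the idea, not the paper's actual implementation: the method names understand, act, and cancel, and the score-based selection in ExpertManager, are assumptions introduced here for illustration.

```python
from abc import ABC, abstractmethod


class Expert(ABC):
    """One task-specialized module (e.g., a behavior or dialogue expert).

    Method names are hypothetical; the paper only requires that every
    expert conform to a common interface so new experts can be plugged in.
    """

    @abstractmethod
    def understand(self, utterance: str) -> float:
        """Score how well this expert can handle the recognized utterance
        (used by the manager to select an expert)."""

    @abstractmethod
    def act(self) -> str:
        """Decide and return the robot's next utterance or action."""

    @abstractmethod
    def cancel(self) -> None:
        """Abort the task this expert is currently executing."""


class ExpertManager:
    """Dispatches recognized utterances to experts and switches tasks."""

    def __init__(self, experts: list[Expert]):
        self.experts = experts
        self.active: Expert | None = None

    def on_utterance(self, utterance: str) -> str:
        # Select the expert that claims the utterance most strongly.
        best = max(self.experts, key=lambda e: e.understand(utterance))
        # If the recognized intention belongs to a different expert,
        # cancel the current task before switching to the new one.
        if self.active is not None and best is not self.active:
            self.active.cancel()
        self.active = best
        return best.act()
```

Under this sketch, extensibility follows directly from the interface: adding a new capability means implementing another Expert subclass and appending it to the manager's list, with no changes to the selection logic.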
