Communicating with Teams of Cooperative Robots

We are designing and implementing a multi-modal interface to a team of dynamically autonomous robots. For this interface, we have elected to use natural language and gesture. Gestures can be either natural gestures perceived by a vision system mounted on the robot or gestures made with a stylus on a Personal Digital Assistant. In this paper, we describe the integrated modes of input and one of the theoretical constructs we use to facilitate cooperation and collaboration among members of a team of robots. An integrated context and dialog processing component that incorporates knowledge of spatial relations enables cooperative activity among the multiple agents, both human and robotic.
