Dynamic generation and refinement of robot verbalization

With a growing number of robots performing autonomously without human intervention, it is difficult to understand what the robots experience along their routes during execution without inspecting execution logs. Rather than requiring users to read through logs, our goal is for robots to answer natural language queries about what they experience and which routes they have chosen. We propose verbalization as the process of converting route experiences into natural language, and highlight the importance of varying verbalizations based on user preferences. We present our verbalization space, which represents the dimensions along which verbalizations can be varied, and our algorithm for automatically generating them on our CoBot robot. We then present our study of how users request different verbalizations in dialog. Using the study data, we learn a language model that maps user dialog to the verbalization space. Finally, we demonstrate the use of the learned model within a dialog system, enabling any user to request information about CoBot's route experience at varying levels of detail.
