Acquiring Accurate Human Responses to Robots’ Questions

In task-oriented robot domains, a human is often designated as a supervisor to monitor the robot and correct its inferences about its state during execution. However, supervision is expensive in terms of human effort. Instead, we are interested in robots asking non-supervisors in the environment for help with state inference. The challenge with asking non-supervisors for help is that they may not always understand the robot's state or question and may respond inaccurately as a result. We identify four types of state information that a robot can include to ground non-supervisors when it requests help: context around the robot, the inferred state prediction, the prediction uncertainty, and feedback about the sensors used for predicting the robot's state. We contribute two wizard-of-oz user studies that test which combination of this state information most increases the accuracy of non-supervisors' responses. In the first study, we consider a block-construction task and use a toy robot to study questions about shape recognition. In the second study, we use our real mobile robot to study questions about localization. In both studies, we identify the same combination of information as the one that increases response accuracy the most. We validate that this combination yields more accurate responses than the combination that a set of HRI experts predicted would be best. Finally, we discuss the appropriateness of our best-performing combination of information for other task-driven robots.
