Expressing Robot Incapability

Our goal is to enable robots to express their incapability, and to do so in a way that communicates both what they are trying to accomplish and why they are unable to accomplish it. We frame this as a trajectory optimization problem: maximize the similarity between the motion expressing incapability and what would amount to successful task execution, while obeying the physical limits of the robot. We introduce and evaluate candidate similarity measures, and show that one in particular generalizes to a range of tasks while producing expressive motions tailored to each task. Our user study shows that our approach automatically generates motions expressing incapability that communicate both the what and the why to end-users, and that these motions improve users' overall perception of the robot and their willingness to collaborate with it in the future.
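As a minimal sketch of this framing (not the paper's actual formulation), consider a single joint asked to reach beyond its physical limit. We optimize a trajectory to be as similar as possible to the unconstrained "successful" trajectory, subject to joint-limit bounds. All names, the similarity measure, and the smoothness weight here are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

T = 20                              # number of trajectory waypoints
q_limit = 1.0                       # physical joint limit (rad), illustrative
xi_goal = np.linspace(0.0, 1.5, T)  # "successful" trajectory: exceeds the limit

def cost(xi):
    # Candidate similarity measure: squared distance to the successful
    # trajectory, plus a smoothness term on consecutive waypoints.
    similarity = np.sum((xi - xi_goal) ** 2)
    smoothness = np.sum(np.diff(xi) ** 2)
    return similarity + 1.0 * smoothness

# The robot's physical limits enter as box bounds on each waypoint.
bounds = [(-q_limit, q_limit)] * T

res = minimize(cost, x0=np.zeros(T), bounds=bounds)
xi_star = res.x

# The optimized motion tracks the attempted task until it saturates at the
# joint limit, expressing both what was attempted and why it failed.
print(xi_star.max() <= q_limit + 1e-9)
```

The key design choice, reflected in the cost above, is that the trajectory is not merely feasible: it is the feasible motion closest (under the chosen similarity measure) to what success would have looked like, which is what makes the failure legible.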
