Not All Errors Are Created Equal: Exploring Human Responses to Robot Errors with Varying Severity

Robot errors occurring during situated interactions with humans are inevitable and elicit social responses. While prior research has suggested how social signals may indicate errors produced by anthropomorphic robots, most of it has not explored Programming by Demonstration (PbD) scenarios or non-humanoid robots. Additionally, how human social signals may help characterize error severity, which is important for determining appropriate error-mitigation strategies, has received limited exploration. We report an exploratory study that investigates how people react to technical errors of varying severity produced by a non-humanoid robotic arm in a PbD scenario. Our results indicate that more severe robot errors may prompt faster, more intense human responses and that multimodal responses tend to escalate as the error unfolds. This provides initial evidence that temporal modeling of multimodal social signals may enable early detection and classification of robot errors, thereby minimizing unwanted consequences.
