Investigating Trust Factors in Human-Robot Shared Control: Implicit Gender Bias Around Robot Voice

This paper explores the impact of warnings, audio feedback, and gender on human-robot trust in the context of autonomous driving, and specifically shared robot control. We build on pre-existing methods for estimating and assessing human-robot trust, in which trust was found to vary as a function of the behavioral quality of an autonomous driving controller. We extend these models and empirical methods to examine the impact of audio cues on trust, specifically the effect of gender-specific audio cues on trust elicitation. Our study compares agents with and without human-voiced indicators of uncertainty, and evaluates differences in trust using both inferred and introspective measures. We find that a person's trust in a robot can be influenced by verbal feedback from the robot agent. In particular, people tend to place more trust in agents whose voice matches their own gender.
