Unfair! Perceptions of Fairness in Human-Robot Teams

How team members are treated influences their performance in the team and their desire to remain part of the team in the future. Prior research in human-robot teamwork proposes fairness definitions for human-robot teaming that are based on the work completed by each team member. However, metrics that properly capture people’s perception of fairness in human-robot teaming remain a research gap. We present work on assessing how well objective metrics capture people’s perception of fairness. First, we extend prior fairness metrics based on team members’ capabilities and workload to a larger team. We also develop a new metric to quantify the amount of time the robot spends working on the same task as each person. We conduct an online user study (n=95) and show that these metrics align with perceived fairness. Importantly, we discover bleed-over effects in people’s assessments of fairness: when asked to rate fairness based on the amount of time the robot spends working with each person, participants used two factors (fairness based on the robot’s time and on teammates’ capabilities). This bleed-over effect is stronger when people are asked to assess fairness based on capability. From these insights, we propose design guidelines for algorithms that enable robotic teammates to consider fairness in their decision-making, maintaining positive team social dynamics and team task performance.
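The abstract describes a metric based on the time the robot spends working on the same task as each person, but does not give its formula. As a minimal sketch, assuming one plausible formalization (the min/max ratio of per-teammate robot co-work time, where 1.0 is perfectly even and values near 0 are highly uneven), the idea could look like:

```python
def time_fairness(robot_time_per_teammate):
    """Hypothetical time-based fairness score in (0, 1].

    robot_time_per_teammate: non-negative durations (e.g., seconds) the
    robot spent working on the same task as each human teammate.
    This min/max-ratio form is an illustrative assumption, not the
    paper's actual metric.
    """
    if not robot_time_per_teammate or max(robot_time_per_teammate) == 0:
        return 1.0  # no co-work time recorded: trivially even
    return min(robot_time_per_teammate) / max(robot_time_per_teammate)

# Robot spent 120 s, 60 s, and 20 s with three teammates respectively:
uneven = time_fairness([120, 60, 20])   # 20/120, a low fairness score
even = time_fairness([50, 50, 50])      # 1.0, perfectly even
```

A robot teammate could monitor such a score during task allocation and prefer assignments that keep it high, which is one way the proposed design guidelines could be operationalized.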
