A Preliminary Study of the Effects of Racialization and Humanness on the Verbal Abuse of Female-Gendered Robots

Recent research has indicated that people engage, and unabashedly so, in the verbal abuse of female-gendered robots. To understand whether this abuse also cuts across racial lines, and furthermore, whether it differs from objectifying treatment of actual people, we conducted a preliminary mixed-methods investigation of online commentary on videos of three such robots -- Bina48, Nadine, and YangYang -- contrasted with commentary on videos of three women with similar identity cues. Analysis of the frequency and nature of abusive commentary suggests that: (1) the verbal abuse of Bina48 and YangYang (two robots racialized as Black and Asian, respectively) is laced with both subtle and overt racism; (2) people more readily engage in the verbal abuse of humanlike robots than of other people. Not only do these findings reflect a concerning phenomenon in their own right, they also warrant consideration of whether people's engagement in abusive interactions with humanlike robots could impact their subsequent interactions with other people.