Person, Human, Neither: The Dehumanization Potential of Automated Image Tagging

Following the literature on dehumanization via technology, we audit six proprietary image tagging algorithms (ITAs) for their potential to perpetuate dehumanization. We examine the outputs that the ITAs produce for a controlled dataset of images depicting a diverse group of people, looking for tags that indicate the presence of a human in the image. Through an analysis of the (mis)use of these tags, we find that there are some individuals whose 'humanness' is not recognized by an ITA, and that these individuals often belong to marginalized social groups. Finally, we compare these findings with the use of the 'face' tag, which can be leveraged for surveillance, revealing that people's faces are often recognized by an ITA even when their 'humanness' is not. Overall, we highlight the subtle ways in which ITAs may inflict widespread, disparate harm, and emphasize the importance of considering the social context of the resulting application.
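
To make the auditing procedure concrete, below is a minimal sketch of the kind of check the abstract describes: for each image, test whether an ITA's returned tags include any human-indicating tag and whether they include a 'face' tag, then aggregate those flags into per-group rates. The tag vocabularies, record fields, and function names here are illustrative assumptions, not the paper's actual implementation or any particular ITA's API.

    # Sketch of the tag-audit logic described above (hypothetical names throughout).
    from collections import defaultdict

    # Assumed sets of tags signalling "a human is present" vs. face detection.
    HUMAN_TAGS = {"person", "human", "people", "man", "woman"}
    FACE_TAG = "face"

    def audit_image(tags: set) -> dict:
        """Check whether an ITA's tag set acknowledges a human and/or a face."""
        return {
            "human_recognized": bool(tags & HUMAN_TAGS),
            "face_recognized": FACE_TAG in tags,
        }

    def audit_by_group(results: list) -> dict:
        """Aggregate per-image flags into rates for each social group.

        `results` is a list of records like {"group": "...", "tags": [...]}
        collected from a controlled image set.
        """
        counts = defaultdict(lambda: {"n": 0, "human": 0, "face": 0})
        for record in results:
            flags = audit_image(set(record["tags"]))
            bucket = counts[record["group"]]
            bucket["n"] += 1
            bucket["human"] += flags["human_recognized"]
            bucket["face"] += flags["face_recognized"]
        return {
            group: {
                "human_rate": c["human"] / c["n"],
                "face_rate": c["face"] / c["n"],
            }
            for group, c in counts.items()
        }

Comparing the resulting human_rate and face_rate across groups surfaces the disparity the paper reports: images whose faces are detected while no human-indicating tag is returned.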
