Bias and Discrimination in AI: A Cross-Disciplinary Perspective

With the widespread use of Artificial Intelligence (AI) in automated decision-making systems, AI bias is becoming increasingly apparent and problematic. One of its negative consequences is discrimination: the unfair or unequal treatment of individuals based on certain characteristics. However, the relationship between bias and discrimination is not always clear. In this paper, we survey relevant literature on bias and discrimination in AI from an interdisciplinary perspective that embeds technical, legal, social, and ethical dimensions. We show that finding solutions to bias and discrimination in AI requires robust cross-disciplinary collaboration.
