What does it mean to 'solve' the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems

The ability to get and keep a job is a key aspect of participating in society and sustaining livelihoods. Yet the way decisions are made about who is eligible for jobs, and why, is rapidly changing with the advent and growing uptake of automated hiring systems (AHSs) powered by data-driven tools. Key concerns about such AHSs include their lack of transparency and their potential to limit access to jobs for specific profiles. In relation to the latter, however, several of these AHSs claim to detect and mitigate discriminatory practices against protected groups and to promote diversity and inclusion at work. Yet whilst these tools have a growing user base around the world, such claims of bias mitigation are rarely scrutinised and evaluated, and when they are, it is almost exclusively from a US socio-legal perspective. In this paper, we introduce a perspective from outside the US by critically examining how three prominent AHSs in regular use in the UK, HireVue, Pymetrics and Applied, understand and attempt to mitigate bias and discrimination. Using publicly available documents, we describe how their tools are designed, validated and audited for bias, highlighting assumptions and limitations, before situating these in the socio-legal context of the UK. The UK has a very different legal background to the US, not only in terms of hiring and equality law but also in terms of data protection (DP) law. We argue that this matters for addressing concerns about transparency, and that it poses a challenge to building bias mitigation into AHSs in a way that definitively meets EU legal standards. This is significant because these AHSs, especially those developed in the US, may obscure rather than mitigate systemic discrimination in the workplace.
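To make the notion of 'auditing for bias' concrete, the sketch below illustrates the four-fifths (adverse impact) rule from the US Uniform Guidelines on Employee Selection Procedures, the kind of US-centred test that such audits commonly invoke. It is a minimal illustration only, not any vendor's implementation: the `selection_rates` and `adverse_impact_ratio` helpers and the toy data are hypothetical constructions for exposition.

```python
# Minimal sketch of an adverse-impact check under the US "four-fifths rule".
# Group labels and outcomes are hypothetical; real vendor audits are
# proprietary and considerably more involved.

from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs -> selection rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [n_selected, n_total]
    for group, selected in outcomes:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / tot for g, (sel, tot) in counts.items()}

def adverse_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.
    A ratio below 0.8 is the conventional flag for adverse impact."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (protected-characteristic group, hired?)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
print(rates)                        # approx. {'A': 0.667, 'B': 0.333}
print(adverse_impact_ratio(rates))  # 0.5 -> below the 0.8 threshold
```

Note that passing this single aggregate threshold says nothing about the UK or EU legal standards discussed in the paper; it is precisely this kind of US-framed test whose transferability we question.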
