From human resources to human rights: Impact assessments for hiring algorithms

Over the years, companies have adopted hiring algorithms because they promise wider candidate pools, lower recruitment costs and less human bias. Alongside these promises, however, come perils: their use can inflict unintentional harms on individual human rights, specifically the rights to work, equality and nondiscrimination, privacy, freedom of expression and freedom of association. Despite these harms, the AI ethics literature has predominantly focused on abstract ethical principles. This is problematic for two reasons. First, AI principles have been criticized for being vague and not actionable. Second, discussing algorithmic risks through vague ethical principles provides no accountability, creating an algorithmic accountability gap. Closing this gap is crucial because, without accountability, the use of hiring algorithms can lead to discrimination and unequal access to employment opportunities. This paper makes two contributions to the AI ethics literature. First, it frames the ethical risks of hiring algorithms using international human rights law as a universal standard for determining algorithmic accountability. Second, it evaluates four types of algorithmic impact assessments in terms of how effectively they address the five human rights of job applicants implicated in hiring algorithms, determining which of these assessments can help companies audit their hiring algorithms and close the algorithmic accountability gap.
