Managing risks, passing over harms? A commentary on the proposed EU AI Regulation in the context of criminal justice

Artificial Intelligence (AI) systems for crime prevention and control have been in use for several decades, although only in recent years have they become the subject of growing criminological attention. Despite its transformative potential for societies, AI has long existed in a normative void, subject to limited regulation and control. The recent draft of the EU AI Regulation can thus be welcomed as the first comprehensive effort to regulate AI, one that attempts to set regional, and potentially global, standards. The approach adopted in the Regulation, however, does not adequately address some of the major concerns surrounding AI, for instance its use in criminal justice arenas. This short intervention discusses how a different approach, focusing on the social harms at stake rather than on technological risks, could help overcome some of the limitations of current regulatory attempts.