Algorithm-assisted decision-making in the public sector: framing the issues using administrative law rules governing discretionary power

This article considers some of the risks and challenges raised by the use of algorithm-assisted decision-making and predictive tools in the public sector. Alongside this, it reviews a number of long-standing English administrative law rules designed to regulate the discretionary power of the state. The principles of administrative law are concerned with the human decisions involved in the exercise of state power and discretion, and thus offer a promising avenue for regulating the growing number of algorithm-assisted decisions within the public sector. The article attempts to re-frame key rules for the new algorithmic environment and argues that ‘old’ law—interpreted for a new context—can help guide lawyers, scientists and public sector practitioners alike when considering the development and deployment of new algorithmic tools. This article is part of a discussion meeting issue ‘The growing ubiquity of algorithms in society: implications, impacts and innovations’.