Regulatory Mechanisms and Algorithms towards Trust in AI / ML

Recent studies suggest that the automated processes prevalent in machine learning (ML) and artificial intelligence (AI) can propagate and exacerbate systemic biases in society. This has led to calls for regulatory mechanisms and algorithms that are transparent, trustworthy, and fair. However, it remains unclear what form such mechanisms and algorithms should take. In this paper we survey recent formal advances put forth by the EU, and consider what other mechanisms can be put in place to avoid discrimination and enhance fairness in algorithm design and use. We consider this an important first step – enacting this vision will require a concerted effort by policy makers, lawyers, and computer scientists alike.
