Algorithmic Transparency and Accountability in Practice

This position paper aims to contribute to the debate on algorithmic transparency and accountability, relating it to compliance with the so-called right to an explanation in EU data protection law. We propose a research agenda grounded in legal-empirical data, which will serve as the basis for pinpointing key issues, formulating evidence-based policy guidance, and conducting further interdisciplinary research. Building on this agenda, we are preparing the co-creation of a concrete prototype for making news-curation recommendation algorithms understandable to the average individual. This position paper is the result of a collaboration between two research centres with expertise in law (CiTiP) and Human-Computer Interaction (Mintlab), enabling a more holistic perspective on a critical societal issue.