Transparent to whom? No algorithmic accountability without a critical audience

ABSTRACT Big data and data science are transforming organizational decision-making. We increasingly defer decisions to algorithms because machines have earned a reputation for outperforming us. As algorithms become embedded within organizations, they grow more influential and increasingly opaque. Those who create algorithms may make arbitrary decisions at every stage of the ‘data value chain’, yet these subjectivities are obscured from view. Algorithms come to reflect the biases of their creators, can reinforce established ways of thinking, and may favour some political orientations over others. This is cause for concern and calls for more transparency in the development, implementation, and use of algorithms in public- and private-sector organizations. We argue that one elementary yet key question remains largely undiscussed: if transparency is a primary concern, to whom should algorithms be transparent? We consider algorithms as socio-technical assemblages and conclude that without a critical audience, algorithms cannot be held accountable.
