Algorithms, Platforms, and Ethnic Bias: An Integrative Essay

Racially biased outcomes are increasingly recognized as a problem that can infect software algorithms and datasets of all types. Digital platforms, in particular, are organizing ever greater portions of social, political, and economic life. This essay examines and organizes current academic and popular-press discussions of how digital tools, despite appearing objective and unbiased, may in fact reproduce or even reinforce existing racial inequities. At the same time, digital tools can also be powerful instruments of objectivity and standardization. Based on a review of the literature, we modify and extend a “value chain–like” model introduced by Danks and London (2017) that depicts where ethnic bias can enter algorithmic decision-making. The model has five phases: input, algorithmic operations, output, users, and feedback. Using this model, we identify nine distinct types of bias that can arise across these phases: (1) training data bias, (2) algorithmic focus bias, (3) algorithmic processing bias, (4) transfer context bias, (5) misinterpretation bias, (6) automation bias, (7) non-transparency bias, (8) consumer bias, and (9) feedback loop bias. In our discussion we note some potential benefits of moving decisions online, since such decisions become traceable and amenable to analysis. New social challenges nevertheless arise as algorithms, and the digital platforms that depend on them, organize increasingly large portions of social, political, and economic life. Formal regulation, public awareness, and further academic research are crucial, because algorithms will make or frame decisions, often without either their creators or those affected being aware of the biases that shape those decisions.
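
The bias types enumerated above can be made concrete with a brief illustration. The sketch below is not part of the original essay; the group labels, field names, and the four-fifths (0.8) screening threshold are illustrative assumptions. It shows how training data bias entering at the input phase can surface as a measurable gap between groups at the output phase, using a simple disparate-impact check of the kind discussed in the fairness literature.

    # Hypothetical sketch (not the essay's method): screening algorithmic
    # output for group-level disparities in positive decisions.
    from collections import defaultdict

    def selection_rates(records, group_key="group", decision_key="approved"):
        """Share of positive decisions per group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for r in records:
            totals[r[group_key]] += 1
            positives[r[group_key]] += int(r[decision_key])
        return {g: positives[g] / totals[g] for g in totals}

    def disparate_impact_ratio(rates):
        """Lowest group rate divided by the highest; values below roughly
        0.8 are commonly treated as a red flag (the four-fifths rule)."""
        return min(rates.values()) / max(rates.values())

    # Output of a model trained on skewed historical data (input phase -> output phase):
    decisions = [
        {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
        {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
        {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
        {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
    ]
    rates = selection_rates(decisions)
    print(rates)                          # {'A': 0.75, 'B': 0.25}
    print(disparate_impact_ratio(rates))  # ~0.33, well below the 0.8 heuristic

If the decisions in this sketch were later reused as training data, the same check would also illustrate feedback loop bias: the skew it measures would be fed back into the next round of the model.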

[1]  Anupam Chander The Racist Algorithm , 2016 .

[2]  Indre Zliobaite,et al.  Using sensitive personal data may be necessary for avoiding discrimination in data-driven decision models , 2016, Artificial Intelligence and Law.

[3]  Salvatore Ruggieri,et al.  A multidisciplinary survey on discrimination analysis , 2013, The Knowledge Engineering Review.

[4]  Josue Ortega,et al.  The Strength of Absent Ties: Social Integration via Online Dating , 2017, ArXiv.

[5]  Lawrence Lessig,et al.  Code and Other Laws of Cyberspace , 1999 .

[6]  Michael Carl Tschantz,et al.  Automated Experiments on Ad Privacy Settings , 2014, Proc. Priv. Enhancing Technol..

[7]  Christopher T. Lowenkamp,et al.  False Positives, False Negatives, and False Analyses: A Rejoinder to "Machine Bias: There's Software Used across the Country to Predict Future Criminals. and It's Biased against Blacks" , 2016 .

[8]  Jure Leskovec,et al.  Human Decisions and Machine Predictions , 2017, The quarterly journal of economics.

[9]  Michael Carl Tschantz,et al.  Discrimination in Online Advertising: A Multidisciplinary Inquiry , 2018 .

[10]  Sonja B. Starr Evidence-Based Sentencing and the Scientific Rationalization of Discrimination , 2013 .

[11]  Pablo J. Boczkowski,et al.  The Relevance of Algorithms , 2013 .

[12]  Nick Seaver Algorithmic Recommendations and Synaptic Functions , 2012 .

[13]  Inside China's Vast New Experiment in Social Ranking , 2018 .

[14]  L. Winner  Do Artifacts Have Politics? , 1980 .

[15]  Andreas Alexiou,et al.  A Social Strategy: How We Profit from Social Media , 2015 .

[16]  T. Guskey,et al.  GRADING: Why You Should Trust Your Judgment , 2016 .

[17]  R. Courtland Bias detectives: the researchers striving to make algorithms fair , 2018, Nature.

[18]  Brad N. Greenwood,et al.  Race and Gender Bias in Online Ratings: An Origins Story , 2017, ICIS.

[19]  Stuart W. Leslie,et al.  Forces of production : a social history of industrial automation , 1985 .

[20]  Solon Barocas,et al.  Big Data, Data Science, and Civil Rights , 2017, ArXiv.

[21]  Michael Luca,et al.  Digital Discrimination: The Case of Airbnb.com , 2014 .

[22]  Arvind Narayanan,et al.  Semantics derived automatically from language corpora contain human-like biases , 2016, Science.

[23]  Tal Z. Zarsky,et al.  The Trouble with Algorithmic Decisions , 2016 .

[24]  B. Goodman Economic Models of (Algorithmic) Discrimination , 2016 .

[25]  Alice J. O'Toole,et al.  An other-race effect for face recognition algorithms , 2011, TAP.

[26]  Mike Ananny,et al.  Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability , 2018, New Media Soc..

[27]  Ruha Benjamin,et al.  The rise of the platform economy , 2016 .

[28]  Joichi Ito,et al.  Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment , 2017, FAT.

[29]  M. Kaminski The right to explanation, explained , 2018, Research Handbook on Information Law and Governance.

[30]  Jessica M. Eaglin Constructing Recidivism Risk , 2016 .

[31]  Eric Gossett,et al.  Big Data: A Revolution That Will Transform How We Live, Work, and Think , 2015 .

[32]  Allan G. King,et al.  “Big Data” and the Risk of Employment Discrimination , 2016 .

[33]  Frank A. Pasquale,et al.  The Scored Society: Due Process for Automated Predictions , 2014 .

[34]  David Danks,et al.  Algorithmic Bias in Autonomous Systems , 2017, IJCAI.

[35]  Anne E. Brown Ridehail Revolution: Ridehail Travel and Equity in Los Angeles , 2018 .

[36]  Jennifer L. Doleac,et al.  The Visible Hand: Race and Online Market Outcomes , 2010 .

[37]  Wanda J. Orlikowski,et al.  Digital Work: A Research Agenda , 2016 .

[38]  Shoshana Zuboff,et al.  In the Age of the Smart Machine: The Future of Work and Power , 1989 .

[39]  Tony Doyle,et al.  Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy , 2017, Inf. Soc..

[40]  Harry Surden,et al.  Values Embedded in Legal Artificial Intelligence , 2017, IEEE Technology and Society Magazine.

[41]  Hany Farid,et al.  The accuracy, fairness, and limits of predicting recidivism , 2018, Science Advances.

[42]  Olha Buchel,et al.  Big Data: A Revolution That Will Transform How We Live, Work, and Think , 2015 .

[43]  Sharad Goel,et al.  Combatting Police Discrimination in the Age of Big Data , 2017 .

[44]  Tom LaGatta,et al.  Conscientious Classification: A Data Scientist's Guide to Discrimination-Aware Classification , 2017, Big Data.

[45]  Latanya Sweeney,et al.  Discrimination in online ad delivery , 2013, CACM.

[46]  Paul B. de Laat,et al.  Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability? , 2017, Philosophy & Technology.

[47]  Susan V. Scott,et al.  Reconfiguring relations of accountability: Materialization of social media in the travel sector , 2011 .

[48]  Sorelle A. Friedler,et al.  Hiring by Algorithm: Predicting and Preventing Disparate Impact , 2016 .

[49]  S. Barley Why the Internet Makes Buying a Car Less Loathsome: How Technologies Change Role Relations , 2015 .

[51]  Venoo Kakar,et al.  Effects of Host Race Information on Airbnb Listing Prices in San Francisco , 2016 .

[52]  Paul B. de Laat,et al.  Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability? , 2018 .

[53]  Krishna P. Gummadi,et al.  Potential for Discrimination in Online Targeted Advertising , 2018, FAT.

[54]  Anil K. Jain,et al.  Face Recognition Performance: Role of Demographic Information , 2012, IEEE Transactions on Information Forensics and Security.

[55]  Kirsten E. Martin Ethical Implications and Accountability of Algorithms , 2018, Journal of Business Ethics.

[56]  Karrie Karahalios,et al.  "Be Careful; Things Can Be Worse than They Appear": Understanding Biased Algorithms and Users' Behavior Around Them in Rating Platforms , 2017, ICWSM.

[57]  J. Hisnanick  In the Age of the Smart Machine: The Future of Work and Power , 1989 .

[58]  Brian A. Jackson,et al.  Future-proofing justice: building a research agenda to address the effects of technological change on the protection of constitutional rights , 2017 .

[59]  Lindsey Barrett,et al.  Reasonably Suspicious Algorithms: Predictive Policing at the United States Border , 2016 .

[60]  Alex Pentland,et al.  Fair, Transparent, and Accountable Algorithmic Decision-making Processes , 2017, Philosophy & Technology.

[61]  Michael Luca,et al.  Racial Discrimination in the Sharing Economy: Evidence from a Field Experiment , 2016 .