Questioning the assumptions behind fairness solutions

In addition to their benefits, optimization systems can have negative economic, moral, social, and political effects on the populations they act upon, as well as on their environments. Frameworks like fairness have been proposed to help service providers address the resulting bias and discrimination during data collection and algorithm design. However, recent reports of neglect, unresponsiveness, and malevolence cast doubt on whether service providers can effectively implement fairness solutions. These reports invite us to revisit the assumptions that fairness solutions make about service providers: namely, that providers have (i) the incentives or (ii) the means to mitigate optimization externalities. Moreover, the environmental impact of these systems suggests that we need (iii) novel frameworks that consider systems other than algorithmic decision-making and recommender systems, and (iv) solutions that go beyond removing related algorithmic biases. Going forward, we propose Protective Optimization Technologies, which enable optimization subjects to defend against the negative consequences of optimization systems.