Auditing for Discrimination in Algorithms Delivering Job Ads

Ad platforms such as Facebook, Google, and LinkedIn promise value for advertisers through targeted advertising. However, multiple studies have shown that ad delivery on such platforms can be skewed by gender or race due to hidden algorithmic optimization by the platforms, even when the advertiser does not request it. Building on prior work measuring skew in ad delivery, we develop a new methodology for black-box auditing of job-ad delivery algorithms for discrimination. Our first contribution is to distinguish skew in ad delivery due to protected categories such as gender or race from skew due to differences in qualification among people in the targeted audience. This distinction is important in U.S. law, where ads may be targeted based on qualifications but not on protected categories. Second, we develop an auditing methodology that separates skew explainable by differences in qualification from skew due to other factors, such as the ad platform's optimization for engagement or the training of its algorithms on biased data. Our method controls for job qualification by comparing the delivery of two concurrent ads for similar jobs at a pair of companies with different de facto gender distributions among their employees. We describe the careful statistical tests that establish evidence of non-qualification skew in the results. Third, we apply the proposed methodology to two prominent targeted advertising platforms for job ads: Facebook and LinkedIn. We confirm skew by gender in ad delivery on Facebook and show that it cannot be justified by differences in qualification; we fail to find such skew in ad delivery on LinkedIn. Finally, we suggest improvements to ad platform practices that could make external, public-interest auditing of their algorithms more feasible and accurate.
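The abstract does not specify the statistical tests used; as a minimal illustrative sketch (not the paper's actual method), the core comparison of paired concurrent ads can be framed as a two-proportion z-test on the gender composition of each ad's delivered audience. All counts below are hypothetical.

```python
from math import sqrt, erf

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test for a difference in proportions.

    x1/n1: e.g., women reached / total users reached by ad A.
    x2/n2: the same quantities for the paired ad B, run concurrently
    on the same targeted audience (hypothetical setup).
    """
    p1, p2 = x1 / n1, x2 / n2
    p_pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: p1 == p2
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value via the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical delivery counts for two similar job ads at companies with
# different de facto gender distributions of employees.
z, p = two_proportion_z_test(x1=3200, n1=10000, x2=4100, n2=10000)
```

Because both ads target the same audience and advertise similar jobs, a significant difference in delivered gender composition cannot be attributed to qualification and points instead to the platform's delivery optimization.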
