Algorithmic Greenlining: An Approach to Increase Diversity

In contexts such as college admissions, hiring, and image search, decision-makers often aspire to formulate selection criteria that yield both high-quality and diverse results. However, simultaneously optimizing for quality and diversity can be challenging, especially when the decision-maker does not know the true quality of any criterion and instead must rely on heuristics and intuition. We introduce an algorithmic framework that takes as input a user's selection criterion, which may yield high-quality but homogeneous results. Using an application-specific notion of substitutability, our algorithms suggest similar criteria with more diverse results, in the spirit of statistical or demographic parity. For instance, given the image search query "chairman", the framework suggests alternative queries that are similar but more gender-diverse, such as "chairperson". In the context of college admissions, we apply our algorithm to a dataset of students' applications and rediscover Texas's "top 10% rule": the input criterion is an ACT score cutoff, and the output is a class rank cutoff, automatically accepting the students in the top decile of their graduating class. Historically, this policy has been effective in admitting students who perform well in college and come from diverse backgrounds. We complement our empirical analysis with learning-theoretic guarantees for estimating the true diversity of any criterion based on historical data.
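The abstract only sketches the framework at a high level. As a rough illustration of the selection step it describes, the snippet below filters a set of substitutable candidate criteria to those that are nearly as good as the user's original criterion but closer to demographic parity. This is a minimal sketch under assumed interfaces: the `Criterion` class, the `parity_gap` measure, the `quality_tolerance` rule, and `suggest_alternatives` are all hypothetical names for illustration, not the paper's actual algorithm or its application-specific substitutability notion.

```python
# Hypothetical sketch, not the paper's algorithm: rank substitutable criteria
# by closeness to demographic parity, keeping only those whose estimated
# quality is within a tolerance of the user's original criterion.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Criterion:
    name: str                       # e.g. the query "chairman" or "ACT >= 30"
    quality: float                  # estimated quality of the selected set
    group_shares: Dict[str, float]  # fraction of selected people per group


def parity_gap(c: Criterion) -> float:
    """Distance from statistical parity: largest minus smallest group share."""
    shares = c.group_shares.values()
    return max(shares) - min(shares)


def suggest_alternatives(
    original: Criterion,
    substitutes: List[Criterion],
    quality_tolerance: float = 0.05,
) -> List[Criterion]:
    """Return substitutes that are nearly as good but more diverse, most diverse first."""
    candidates = [
        c for c in substitutes
        if c.quality >= original.quality - quality_tolerance
        and parity_gap(c) < parity_gap(original)
    ]
    return sorted(candidates, key=parity_gap)


if __name__ == "__main__":
    # Toy numbers echoing the abstract's image-search example.
    chairman = Criterion("chairman", 0.90, {"men": 0.85, "women": 0.15})
    chairperson = Criterion("chairperson", 0.88, {"men": 0.55, "women": 0.45})
    ceo = Criterion("ceo", 0.91, {"men": 0.80, "women": 0.20})
    for c in suggest_alternatives(chairman, [chairperson, ceo]):
        print(c.name, round(parity_gap(c), 2))
```

In this toy run, "chairperson" ranks first because it trades a small amount of estimated quality for a much smaller parity gap, which mirrors the substitution behavior the abstract describes.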
