Blackbox Post-Processing for Multiclass Fairness

Applying standard machine learning approaches for classification can produce unequal results across different demographic groups. When such models are then deployed in real-world settings, these inequities can have negative societal impacts, which has motivated the development of a variety of approaches to fair classification in recent years. In this paper, we consider the problem of modifying the predictions of a blackbox machine learning classifier in order to achieve fairness in a multiclass setting. To do so, we extend the 'post-processing' approach of Hardt, Price, and Srebro (2016), which focuses on fair binary classification, to the setting of fair multiclass classification. We explore when our approach produces both fair and accurate predictions through systematic synthetic experiments, and we also evaluate discrimination-fairness tradeoffs on several publicly available real-world application datasets. Overall, we find that our approach enforces fairness with only minor drops in accuracy when the number of individuals in the dataset is large relative to the number of classes and protected groups.
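The paper's exact formulation is not reproduced here, but the following is a minimal sketch of how a multiclass post-processing step in the spirit of Hardt, Price, and Srebro (2016) could be set up as a linear program, assuming numpy and cvxpy are available. The function name `fit_postprocessor`, the choice of objective (agreement with the blackbox labels), and the equalized-odds-style constraint are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import cvxpy as cp


def fit_postprocessor(y_true, y_pred, group, n_classes, n_groups):
    """Hedged sketch: learn one row-stochastic relabeling matrix per protected
    group so that group-conditional confusion rates are equalized.
    Assumes every (group, true class) combination appears in the data."""
    # Empirical blackbox confusion rates per group:
    # C[a][j, k] = P(blackbox label = k | true label = j, group = a).
    C = []
    for a in range(n_groups):
        conf = np.zeros((n_classes, n_classes))
        for j in range(n_classes):
            rows = (group == a) & (y_true == j)
            if rows.sum() > 0:
                conf[j] = np.bincount(y_pred[rows], minlength=n_classes) / rows.sum()
        C.append(conf)

    # T[a][j, k] = P(adjusted label = k | blackbox label = j, group = a).
    T = [cp.Variable((n_classes, n_classes), nonneg=True) for _ in range(n_groups)]

    # Confusion rates after relabeling: P(adjusted = k | true = j, group = a).
    adjusted = [C[a] @ T[a] for a in range(n_groups)]

    constraints = [cp.sum(T[a], axis=1) == 1 for a in range(n_groups)]  # rows sum to 1
    # Multiclass equalized-odds-style constraint: adjusted confusion rates
    # must match across all protected groups.
    constraints += [adjusted[a] == adjusted[0] for a in range(1, n_groups)]

    # Objective: keep as many blackbox labels unchanged as possible (a simple
    # linear proxy for accuracy; the paper's objective may differ).
    counts = [np.bincount(y_pred[group == a], minlength=n_classes) for a in range(n_groups)]
    objective = cp.Maximize(sum(counts[a] @ cp.diag(T[a]) for a in range(n_groups)))

    cp.Problem(objective, constraints).solve()
    return [t.value for t in T]
```

At prediction time, an individual in group a whose blackbox label is j would receive an adjusted label drawn from row j of the learned matrix for group a; the randomization is what makes the fairness constraint satisfiable without retraining the underlying classifier.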

[1]  Toniann Pitassi,et al.  Fairness through awareness , 2011, ITCS '12.

[2]  Fabio Mendoza Palechor,et al.  Dataset for estimation of obesity levels based on eating habits and physical condition in individuals from Colombia, Peru and Mexico , 2019, Data in brief.

[3]  Karthikeyan Natesan Ramamurthy,et al.  Optimized Score Transformation for Fair Classification , 2019, AISTATS.

[4]  Timnit Gebru,et al.  Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification , 2018, FAT.

[5]  Yiling Chen,et al.  A Short-term Intervention for Long-term Fairness in the Labor Market , 2017, WWW.

[6]  Toon Calders,et al.  Building Classifiers with Independency Constraints , 2009, 2009 IEEE International Conference on Data Mining Workshops.

[7]  Nisarg Shah,et al.  Designing Fairly Fair Classifiers Via Economic Fairness Notions , 2020, WWW.

[8]  Nathan Srebro,et al.  Equality of Opportunity in Supervised Learning , 2016, NIPS.

[9]  Nathan Srebro,et al.  Learning Non-Discriminatory Predictors , 2017, COLT.

[10]  Yaniv Romano,et al.  Achieving Equalized Odds by Resampling Sensitive Attributes , 2020, NeurIPS.

[11]  Mohamed Hebiri,et al.  Fairness guarantee in multi-class classification , 2021 .

[12]  Max A. Little,et al.  Accurate telemonitoring of Parkinson’s disease progression by non-invasive speech tests , 2009 .

[13]  Linda F. Wightman LSAC National Longitudinal Bar Passage Study. LSAC Research Report Series. , 1998 .

[14]  Blake Lemoine,et al.  Mitigating Unwanted Biases with Adversarial Learning , 2018, AIES.

[15]  Qing Ye,et al.  Unbiased Subdata Selection for Fair Classification: A Unified Framework and Scalable Algorithms , 2020, ArXiv.

[16]  A. Gorban,et al.  The Five Factor Model of personality and evaluation of drug consumption risk , 2015, 1506.06297.