How Do Fair Decisions Fare in Long-term Qualification?

Although many fairness criteria have been proposed for decision making, their long-term impact on the well-being of a population remains unclear. In this work, we study the dynamics of population qualification and algorithmic decisions in a partially observed Markov decision process (POMDP) setting. By characterizing the equilibrium of these dynamics, we analyze the long-term impact of static fairness constraints on the equality and improvement of group well-being. Our results show that static fairness constraints can either promote equality or exacerbate disparity, depending on the driving factor of qualification transitions and the effect of sensitive attributes on feature distributions. We also consider possible interventions that can effectively improve group qualification or promote equality of group qualification. Our theoretical results and experiments on static real-world datasets with simulated dynamics show that our framework can be used to facilitate social science studies.
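
To make the setting concrete, the minimal sketch below simulates how a group's qualification rate can evolve when decisions come from a fixed threshold rule and qualification transitions depend on the (qualification, decision) pair, in the spirit of the dynamics described above. It is not the paper's actual model: the Gaussian feature distributions, the `TRANSITIONS` table, the `simulate` helper, and every numeric value are hypothetical placeholders chosen only for illustration.

```python
# Illustrative sketch only: a toy simulation of group qualification dynamics
# under a fixed-threshold decision rule. The Gaussian feature model, the
# transition table, and all numeric values are hypothetical assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical group-dependent feature model: unqualified / qualified
# individuals in each group draw a 1-D score from different Gaussians.
FEATURE_MEANS = {
    "a": (-1.0, 1.0),
    "b": (-1.5, 0.5),  # group "b" has shifted feature distributions (assumption)
}
SIGMA = 1.0

# Hypothetical transition model: probability of being qualified next round,
# indexed by (currently qualified, accepted).
TRANSITIONS = {
    (1, 1): 0.9, (1, 0): 0.6,  # acceptance helps the qualified stay qualified
    (0, 1): 0.5, (0, 0): 0.2,  # acceptance helps the unqualified improve
}

def simulate(group, threshold, alpha0=0.5, rounds=50, n=20_000):
    """Return the trajectory of a group's qualification rate alpha_t when
    decisions are made by thresholding the observed feature each round."""
    mu0, mu1 = FEATURE_MEANS[group]
    alpha, trajectory = alpha0, [alpha0]
    for _ in range(rounds):
        qualified = rng.random(n) < alpha
        features = np.where(qualified,
                            rng.normal(mu1, SIGMA, n),
                            rng.normal(mu0, SIGMA, n))
        accepted = features >= threshold
        # Draw next-round qualification from the transition model.
        p_next = np.select(
            [qualified & accepted, qualified & ~accepted, ~qualified & accepted],
            [TRANSITIONS[1, 1], TRANSITIONS[1, 0], TRANSITIONS[0, 1]],
            default=TRANSITIONS[0, 0],
        )
        alpha = float((rng.random(n) < p_next).mean())
        trajectory.append(alpha)
    return trajectory

if __name__ == "__main__":
    # Trace how each group's qualification rate settles under one shared threshold.
    for group in ("a", "b"):
        traj = simulate(group, threshold=0.0)
        print(f"group {group}: alpha_0 = {traj[0]:.2f} -> alpha_T ~ {traj[-1]:.2f}")
```

Varying the per-group thresholds in such a simulation (for example, to equalize acceptance rates or true positive rates across groups) is one way to probe empirically whether a static fairness constraint pushes the equilibrium qualification rates of the two groups together or further apart.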
