Fairness-Aware Programming

Increasingly, programming tasks involve automating and deploying sensitive decision-making processes that may have adverse impacts on individuals or groups of people. Fairness in automated decision-making has thus become a pressing problem, attracting interdisciplinary attention. In this work, we aim to make fairness a first-class concern in programming. Specifically, we propose fairness-aware programming, where programmers can state fairness expectations natively in their code and have a runtime system monitor decision-making and report violations of fairness. We present a rich and general specification language that allows a programmer to specify a range of fairness definitions from the literature, as well as others. As the decision-making program executes, the runtime maintains statistics on the decisions made and incrementally checks whether the fairness definitions have been violated, reporting such violations to the developer. The advantages of this approach are twofold: (i) enabling declarative mathematical specifications of fairness in the programming language simplifies the process of checking fairness, as the programmer does not have to write ad hoc code for maintaining statistics; (ii) compared to existing techniques for checking and ensuring fairness, our approach monitors a decision-making program in the wild, which may be running on a distribution that is unlike the dataset on which a classifier was trained and tested. We describe an implementation of our proposed methodology as a library in the Python programming language and illustrate its use on case studies from the algorithmic fairness literature.
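To make the idea concrete, below is a minimal sketch of what such a runtime monitor could look like in Python. This is an illustrative assumption, not the paper's actual library API: the decorator name `monitor`, its parameters, and the choice of demographic parity (checked via the 80% rule on positive-decision rates between groups) are all hypothetical.

```python
# Sketch of a fairness monitor: a hypothetical decorator `monitor` wraps a
# decision-making function, maintains running counts of positive decisions
# per group, and reports a demographic-parity violation when the ratio of
# positive-decision rates between groups drops below a threshold.
import random
from collections import defaultdict


def monitor(group_of, threshold=0.8, warmup=100):
    """Wrap a decision function and incrementally check demographic parity.

    group_of:  maps the decision's input to a group label (e.g., a
               sensitive attribute); a hypothetical parameter.
    threshold: minimum allowed ratio of positive-decision rates between
               any two groups (0.8 corresponds to the 80% rule).
    warmup:    number of decisions to observe before checking.
    """
    def decorate(decide):
        totals = defaultdict(int)     # decisions seen per group
        positives = defaultdict(int)  # positive decisions per group
        seen = 0

        def wrapped(x):
            nonlocal seen
            decision = decide(x)
            g = group_of(x)
            totals[g] += 1
            positives[g] += int(decision)
            seen += 1
            if seen >= warmup and len(totals) > 1:
                rates = {g: positives[g] / totals[g] for g in totals}
                lo, hi = min(rates.values()), max(rates.values())
                if hi > 0 and lo / hi < threshold:
                    print(f"fairness violation: rate ratio {lo / hi:.2f} "
                          f"< {threshold} (rates: {rates})")
            return decision
        return wrapped
    return decorate


# Usage: monitor a toy loan-approval classifier.
@monitor(group_of=lambda applicant: applicant["sex"], threshold=0.8)
def approve_loan(applicant):
    return applicant["income"] > 50_000


for _ in range(1_000):
    approve_loan({"sex": random.choice(["M", "F"]),
                  "income": random.gauss(55_000, 15_000)})
```

A production monitor would additionally need statistical hedging, such as concentration-based confidence bounds over the estimated rates, so that sampling noise on a small number of decisions is not flagged as a violation.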
