PAC-Bayes and Fairness: Risk and Fairness Bounds on Distribution Dependent Fair Priors

We address the problem of algorithmic fairness: ensuring that sensitive information does not unfairly influence the outcome of a classifier. We tackle this issue within the PAC-Bayes framework and present an approach that trades off and bounds the risk and the fairness of the Gibbs classifier, where fairness is measured with respect to several state-of-the-art fairness measures. For this purpose, we further develop the idea that a PAC-Bayes prior can be defined in terms of the data-generating distribution without actually needing to know it. In particular, we define a prior and a posterior which give more weight to functions that exhibit good generalization and fairness properties.
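To make the idea concrete, the following is a minimal toy sketch (not the paper's actual construction) of a posterior over a hypothesis class that upweights hypotheses with both low empirical risk and low unfairness, from which a Gibbs classifier samples its predictor. The threshold hypothesis class, the demographic-parity gap as the fairness measure, and the hyperparameters `gamma` and `lam` are all illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature x, label y, binary sensitive attribute s.
n = 2000
x = rng.normal(size=n)
s = rng.integers(0, 2, size=n)
y = (x + 0.3 * s + 0.2 * rng.normal(size=n) > 0).astype(int)

# Illustrative hypothesis class: threshold classifiers h_t(x) = 1[x > t].
thresholds = np.linspace(-2.0, 2.0, 41)

def risk(t):
    """Empirical 0-1 risk of the threshold classifier h_t."""
    return np.mean((x > t).astype(int) != y)

def dp_gap(t):
    """Demographic-parity gap: |P(h_t=1 | s=0) - P(h_t=1 | s=1)|."""
    pred = (x > t).astype(int)
    return abs(pred[s == 0].mean() - pred[s == 1].mean())

# Weight each hypothesis by exp(-gamma * (risk + lam * unfairness)),
# so low-risk, fair hypotheses get more mass (Gibbs-style weighting;
# gamma and lam are made-up hyperparameters for this sketch).
gamma, lam = 10.0, 1.0
scores = np.array([risk(t) + lam * dp_gap(t) for t in thresholds])
weights = np.exp(-gamma * scores)
posterior = weights / weights.sum()

def gibbs_predict(x_new):
    """Gibbs classifier: sample a hypothesis from the posterior, then predict."""
    t = rng.choice(thresholds, p=posterior)
    return int(x_new > t)
```

The exponential weighting mirrors the common Gibbs-posterior form in PAC-Bayes analyses; here the fairness penalty is simply folded into the score, whereas the paper bounds risk and fairness separately.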