Fairness in ad auctions through inverse proportionality

We study the tradeoff between social welfare maximization and fairness in the context of ad auctions. Specifically, we consider a setting where users arrive one at a time, $k$ advertisers submit values for each user, and the auction assigns a distribution over ads to each user. Following the work of Dwork and Ilvento (2019) and Chawla et al. (2020), our goal is to design a truthful auction that satisfies "individual fairness" in its outcomes: informally speaking, users who are similar to each other should obtain similar allocations of ads. We express this fairness requirement as a stability condition: any two users who are assigned multiplicatively similar values by all the advertisers must receive additively similar allocations for each advertiser. The value stability constraint is specified by a function that maps the multiplicative distance between value vectors to the maximum allowable $\ell_\infty$ distance between the corresponding allocations. Standard auctions do not satisfy this kind of value stability. Our main contribution is a new class of allocation algorithms, called Inverse Proportional Allocation, that achieves value stability with respect to an expressive class of stability conditions. These allocation algorithms are truthful, prior-free, and achieve a constant-factor approximation to the optimal (unconstrained) social welfare. In particular, the approximation ratio is independent of the number of advertisers in the system; in this respect, these algorithms greatly surpass the guarantees achieved in previous work. In fact, under a mild assumption on the value stability constraint, our algorithms achieve a near-optimal tradeoff between fairness and social welfare. We also extend our results to a broader notion of fairness that we call subset fairness.
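
To make the stability condition concrete, the following minimal sketch (in Python, with hypothetical helper names not taken from the paper) checks it for a pair of users: it computes a multiplicative distance between their value vectors, here taken by assumption to be the largest coordinatewise ratio, and compares the $\ell_\infty$ gap between their allocations against the allowed bound obtained by applying the constraint function $f$ to that distance. The example also illustrates the claim that the standard proportional allocation can violate a sharp stability constraint.

```python
import numpy as np

def multiplicative_distance(v, w):
    """Largest coordinatewise ratio between two strictly positive value
    vectors -- one natural (assumed) notion of multiplicative similarity;
    identical vectors have distance 1."""
    v, w = np.asarray(v, dtype=float), np.asarray(w, dtype=float)
    return float(np.max(np.maximum(v / w, w / v)))

def linf_distance(x, y):
    """Per-advertiser (ell_infinity) gap between two allocation vectors."""
    return float(np.max(np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))))

def satisfies_value_stability(v, w, x, y, f):
    """Stability check: the ell_infinity gap between the allocations x, y of
    two users must not exceed f applied to the multiplicative distance
    between their value vectors v, w."""
    return linf_distance(x, y) <= f(multiplicative_distance(v, w))

def proportional_allocation(v):
    """Standard proportional allocation x_i = v_i / sum_j v_j (for contrast;
    this is not the paper's inverse proportional rule)."""
    v = np.asarray(v, dtype=float)
    return v / v.sum()

if __name__ == "__main__":
    v, w = [1.0, 1.0], [1.0, 2.0]       # two users whose values differ by a factor of 2 on one advertiser
    x, y = proportional_allocation(v), proportional_allocation(w)
    f = lambda d: 0.1 * (d - 1.0)       # hypothetical stability constraint, zero at distance 1
    print(x, y, satisfies_value_stability(v, w, x, y, f))
    # -> [0.5 0.5] [0.333... 0.666...] False: the 1/6 allocation gap exceeds f(2) = 0.1
```

Under this (assumed) choice of distance and constraint, the proportional rule fails the check, which is consistent with the abstract's observation that standard auctions do not satisfy value stability.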

[1] Toniann Pitassi, et al. Fairness through awareness, 2011, ITCS '12.

[2] Alkmini Sgouritsa, et al. On the Efficiency of the Proportional Allocation Mechanism for Divisible Resources, 2016, Theory of Computing Systems.

[3] John N. Tsitsiklis, et al. Efficiency loss in a network resource allocation game: the case of elastic supply, 2004, IEEE Transactions on Automatic Control.

[4] Guy N. Rothblum, et al. Preference-informed fairness, 2019, ITCS.

[5] Meena Jagadeesan, et al. Multi-category fairness in sponsored search auctions, 2019, FAT*.

[6] Yiling Chen, et al. Fair classification and social welfare, 2019, FAT*.

[7] Krishna P. Gummadi, et al. From Parity to Preference-based Notions of Fairness in Classification, 2017, NIPS.

[8] Catherine E. Tucker, et al. Algorithmic Bias? An Empirical Study of Apparent Gender-Based Discrimination in the Display of STEM Career Ads, 2019, Manag. Sci.

[9] Ioannis Caragiannis, et al. Welfare Guarantees for Proportional Allocations, 2014, SAGT.

[10] Ioannis Caragiannis, et al. The Efficiency of Fair Division, 2009, Theory of Computing Systems.

[11] Maria-Florina Balcan, et al. Envy-Free Classification, 2018, NeurIPS.

[12] Dimitris Bertsimas, et al. The Price of Fairness, 2011, Oper. Res.

[13] Nisheeth K. Vishnoi, et al. Toward Controlling Discrimination in Online Ad Auctions, 2019, ICML.

[14] Cynthia Dwork, et al. Fairness Under Composition, 2018, ITCS.

[15] Michael Carl Tschantz, et al. Bidding strategies with gender nondiscrimination constraints for online ad auctions, 2019, FAT*.

[16] Piotr Sapiezynski, et al. Discrimination through Optimization, 2019, Proc. ACM Hum. Comput. Interact.