Who Leads and Who Follows in Strategic Classification?

As predictive models are deployed into the real world, they must increasingly contend with strategic behavior. A growing body of work on strategic classification treats this problem as a Stackelberg game: the decision-maker “leads” in the game by deploying a model, and the strategic agents “follow” by playing their best response to the deployed model. Importantly, in this framing, the burden of learning is placed solely on the decision-maker, while the agents’ best responses are implicitly treated as instantaneous. In this work, we argue that the order of play in strategic classification is fundamentally determined by the relative frequencies at which the decision-maker and the agents adapt to each other’s actions. In particular, by generalizing the standard model to allow both players to learn over time, we show that a decision-maker that makes updates faster than the agents can reverse the order of play, meaning that the agents lead and the decision-maker follows. We observe in standard learning settings that such a role reversal can be desirable for both the decision-maker and the strategic agents. Finally, we show that a decision-maker with the freedom to choose their update frequency can induce learning dynamics that converge to Stackelberg equilibria with either order of play.
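The timescale argument above can be illustrated with a toy simulation. The sketch below is not from the paper: the quadratic losses, the exact-best-response shortcut for the fast player, and the two-point finite-difference update for the slow player are all illustrative assumptions. It models the idealized limit of timescale separation, in which the faster player fully best-responds between each of the slower player's updates, so the slower player effectively optimizes its loss along the other player's best-response curve, i.e., the slower player leads.

```python
# Hypothetical quadratic losses (chosen only so that the two Stackelberg
# equilibria differ; not taken from the paper):
#   decision-maker: f(theta, x) = (theta - x)^2 + (theta - 1)^2
#   agents:         g(theta, x) = (x - theta)^2 + x^2
f = lambda th, x: (th - x) ** 2 + (th - 1) ** 2
g = lambda th, x: (x - th) ** 2 + x ** 2

# Exact best responses, standing in for the fast player's converged
# inner learning loop.
br_x = lambda th: th / 2          # argmin_x g(theta, x)
br_th = lambda x: (x + 1) / 2     # argmin_theta f(theta, x)

def slow_player(effective_loss, z0=0.0, lr=0.05, delta=1e-4, steps=2000):
    """Zeroth-order gradient descent on the slow player's effective loss,
    i.e. its own loss evaluated after the fast player has best-responded."""
    z = z0
    for _ in range(steps):
        grad = (effective_loss(z + delta) - effective_loss(z - delta)) / (2 * delta)
        z -= lr * grad
    return z

# Agents update fast, decision-maker slow -> decision-maker leads.
th_lead = slow_player(lambda th: f(th, br_x(th)))
print("decision-maker leads:", th_lead, br_x(th_lead))   # -> approx (0.8, 0.4)

# Decision-maker updates fast, agents slow -> agents lead.
x_lead = slow_player(lambda x: g(br_th(x), x))
print("agents lead:", br_th(x_lead), x_lead)             # -> approx (0.6, 0.2)
```

The two runs converge to different points, matching the abstract's claim: which Stackelberg equilibrium the dynamics reach is determined entirely by who updates on the slower timescale, and here the role reversal is achieved simply by swapping which player is treated as slow.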
