Prediction without Preclusion: Recourse Verification with Reachable Sets

Machine learning models are often used to decide who will receive a loan, a job interview, or a public benefit. Standard techniques for building these models use features about people but overlook their actionability. In turn, models can assign predictions that are fixed, meaning that consumers who are denied loans, interviews, or benefits may be permanently locked out of access to credit, employment, or assistance. In this work, we introduce recourse verification, a formal testing procedure to flag models that assign fixed predictions. We develop machinery to reliably determine whether a given model can provide recourse to its decision subjects under a set of user-specified actionability constraints. We demonstrate how our tools can ensure recourse and adversarial robustness on real-world datasets and use them to study the infeasibility of recourse in real-world lending datasets. Our results highlight how models can inadvertently assign fixed predictions that permanently bar access, and we provide tools that account for actionability when developing models.
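To make the verification task concrete, below is a minimal sketch of recourse verification by brute-force enumeration over a discrete reachable set: given a denied individual, a black-box classifier, and simple per-feature actionability constraints, check whether any reachable point receives a positive prediction. The constraint format, the function names (reachable_set, has_recourse), and the toy classifier are illustrative assumptions, not the paper's implementation or API.

# Illustrative sketch of recourse verification over a discrete reachable set.
# Constraint format and names are assumptions, not the paper's implementation.
from itertools import product
from typing import Callable, Dict, List, Sequence


def reachable_set(x: Sequence[float],
                  feasible_values: Dict[int, List[float]]) -> List[List[float]]:
    """Enumerate all points reachable from x through the person's own actions.

    feasible_values maps a feature index to the values that feature may take;
    features not listed are treated as immutable.
    """
    choices = [feasible_values.get(j, [x[j]]) for j in range(len(x))]
    return [list(point) for point in product(*choices)]


def has_recourse(x: Sequence[float],
                 feasible_values: Dict[int, List[float]],
                 predict: Callable[[Sequence[float]], int]) -> bool:
    """Return True if some reachable point flips the prediction to 1."""
    return any(predict(x_new) == 1 for x_new in reachable_set(x, feasible_values))


if __name__ == "__main__":
    # Toy linear classifier over (income_bracket, n_accounts, age).
    predict = lambda x: int(0.6 * x[0] + 0.5 * x[1] - 2.0 >= 0)
    x = [1, 1, 30]
    # Income bracket can rise to 2 and accounts to 3; age is immutable.
    constraints = {0: [1, 2], 1: [1, 2, 3]}
    print(has_recourse(x, constraints, predict))  # True: some feasible action is approved

Exhaustive enumeration scales poorly and is only meant to illustrate the decision problem that the paper's verification machinery answers; a prediction is fixed precisely when no point in the reachable set is assigned a positive outcome.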
