Actionable Recourse in Linear Classification

Classification models are often used to make decisions that affect humans: whether to approve a loan application, extend a job offer, or provide insurance. In such applications, individuals should have the ability to change the decision of the model. When a person is denied a loan by a credit scoring model, for example, they should be able to change the input variables of the model in a way that guarantees approval. Otherwise, this person will be denied the loan so long as the model is deployed, and -- more importantly -- will lack agency over a decision that affects their livelihood. In this paper, we propose to evaluate a linear classification model in terms of recourse, which we define as the ability of a person to change the decision of the model through actionable input variables (e.g., income, as opposed to age or marital status). We present an integer programming toolkit to: (i) measure the feasibility and difficulty of recourse in a target population; and (ii) generate a list of actionable changes for a person to obtain a desired outcome. We discuss how our tools can inform different stakeholders by using them to audit recourse for credit scoring models built with real-world datasets. Our results illustrate how recourse can be significantly affected by common modeling practices, and motivate the need to evaluate recourse in algorithmic decision-making.
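To make the idea concrete, the sketch below shows how finding a lowest-cost actionable change can be posed as a small integer program. This is an illustration, not the authors' toolkit: the classifier weights, the denied applicant, the discrete action grid, and the costs are all hypothetical, and it uses the open-source PuLP modeling library with its bundled CBC solver. Binary indicators select at most one candidate change per actionable feature; immutable features (here, age) receive no candidate actions.

    import pulp

    # Linear classifier: approve iff  w . x + b >= 0.
    # (Hypothetical weights and intercept, for illustration only.)
    w = {"income": 0.8, "debt": -0.6, "age": 0.1}
    b = -1.0

    # A denied applicant (w . x0 + b < 0).
    x0 = {"income": 1.0, "debt": 1.5, "age": 0.3}

    # Discrete candidate changes (delta, cost) for each *actionable* feature;
    # immutable features such as age get no candidate actions.
    actions = {
        "income": [(0.5, 1.0), (1.0, 2.5)],
        "debt": [(-0.5, 0.8), (-1.0, 2.0)],
    }

    prob = pulp.LpProblem("recourse", pulp.LpMinimize)

    # u[j][k] = 1 if the applicant takes the k-th candidate action on feature j.
    u = {
        j: [pulp.LpVariable(f"u_{j}_{k}", cat="Binary") for k in range(len(acts))]
        for j, acts in actions.items()
    }

    # Objective: minimize the total cost of the chosen actions.
    prob += pulp.lpSum(
        cost * u[j][k]
        for j, acts in actions.items()
        for k, (_, cost) in enumerate(acts)
    )

    # At most one candidate action per feature.
    for j in actions:
        prob += pulp.lpSum(u[j]) <= 1

    # The modified point must cross the decision boundary: w . (x0 + a) + b >= 0.
    score0 = sum(w[j] * x0[j] for j in w) + b
    prob += score0 + pulp.lpSum(
        w[j] * delta * u[j][k]
        for j, acts in actions.items()
        for k, (delta, _) in enumerate(acts)
    ) >= 0

    prob.solve(pulp.PULP_CBC_CMD(msg=False))

    if prob.status == pulp.LpStatusOptimal:
        for j, acts in actions.items():
            for k, (delta, cost) in enumerate(acts):
                if u[j][k].value() > 0.5:
                    print(f"change {j} by {delta:+} at cost {cost}")
    else:
        print("no recourse: no feasible set of actions flips the decision")

On this toy instance the program returns the cheapest flipping combination (raise income by 1.0 and reduce debt by 0.5). If the program is infeasible, the applicant has no recourse under the given action set, which is the population-level quantity the paper's audit is designed to measure.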
