Identifying Significant Predictive Bias in Classifiers

We present a novel subset scan method to detect whether a probabilistic binary classifier has statistically significant bias (over- or under-predicting risk) for some subgroup, and to identify the characteristics of that subgroup. This form of model checking and goodness-of-fit testing provides an interpretable way to detect classifier bias or regions of poor classifier fit. It allows consideration not just of subgroups of a priori interest or of low dimension, but of the space of all possible subgroups of feature values. To address the difficulty of searching these exponentially many possible subgroups, we use fast subset scan and parametric bootstrap-based methods. Extensions of the method penalize the complexity of the detected subgroup and also identify subgroups with high classification error. We demonstrate these methods and find interesting results on COMPAS crime-recidivism and credit-delinquency data.
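To make the approach concrete, the sketch below illustrates the two core ingredients named above at toy scale: a Bernoulli likelihood-ratio score that compares a subgroup's observed outcomes to the classifier's predicted risks under a multiplicative odds shift, and a parametric bootstrap that calibrates the maximum score over subgroups. This is a minimal illustration, not the authors' implementation: it exhaustively scans only subgroups defined by a single value of a single categorical feature, rather than using the fast multidimensional subset scan over all feature subgroups, and the function names (subgroup_score, scan, bootstrap_pvalue) are hypothetical.

import numpy as np
from scipy.optimize import minimize_scalar

def subgroup_score(y, p):
    """Max Bernoulli log-likelihood ratio over a multiplicative odds shift q
    applied to the predicted risks p inside the subgroup (q = 1: no bias)."""
    def neg_llr(log_q):
        q = np.exp(log_q)
        p_shift = q * p / (1.0 - p + q * p)  # risk with odds multiplied by q
        return -np.sum(y * np.log(p_shift / p)
                       + (1.0 - y) * np.log((1.0 - p_shift) / (1.0 - p)))
    res = minimize_scalar(neg_llr, bounds=(-3.0, 3.0), method="bounded")
    return -res.fun

def scan(X, y, p):
    """Exhaustively score subgroups defined by one value of one feature;
    a stand-in for the fast subset scan over multidimensional subgroups."""
    best_score, best_subgroup = 0.0, None
    for col in X.columns:
        for val in X[col].unique():
            mask = (X[col] == val).to_numpy()
            score = subgroup_score(y[mask], p[mask])
            if score > best_score:
                best_score, best_subgroup = score, (col, val)
    return best_score, best_subgroup

def bootstrap_pvalue(X, y, p, n_boot=200, seed=0):
    """Parametric bootstrap: redraw outcomes from the classifier's own
    predicted risks, rescan, and compare against the observed max score."""
    rng = np.random.default_rng(seed)
    observed, subgroup = scan(X, y, p)
    null_max = [scan(X, rng.binomial(1, p), p)[0] for _ in range(n_boot)]
    pvalue = np.mean([s >= observed for s in null_max])
    return observed, subgroup, pvalue

In use, X would be a pandas DataFrame of discrete features, y the binary outcomes, and p the classifier's predicted probabilities (clipped away from 0 and 1); a small bootstrap p-value indicates a subgroup whose risk the classifier significantly over- or under-estimates.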
