Runaway Feedback Loops in Predictive Policing

Predictive policing systems are increasingly used to determine how to allocate police across a city in order to best prevent crime. Discovered crime data (e.g., arrest counts) are used to help update the model, and the process is repeated. Such systems have been empirically shown to be susceptible to runaway feedback loops, where police are repeatedly sent back to the same neighborhoods regardless of the true crime rate. In response, we develop a mathematical model of predictive policing that proves why this feedback loop occurs, show empirically that this model exhibits such problems, and demonstrate how to change the inputs to a predictive policing system (in a black-box manner) so that the runaway feedback loop does not occur, allowing the true crime rate to be learned. Our results are quantitative: we can establish a link (in our model) between the degree to which runaway feedback causes problems and the disparity in crime rates between areas. Moreover, we can also demonstrate the way in which reported incidents of crime (those reported by residents) and discovered incidents of crime (those directly observed by police officers dispatched as a result of the predictive policing algorithm) interact: in brief, while reported incidents can attenuate the degree of runaway feedback, they cannot entirely remove it without the interventions we suggest.
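
The abstract describes the mechanism only at a high level. The sketch below is a minimal, self-contained illustration of the kind of feedback loop it refers to: a toy two-region, urn-style simulation in which patrols are allocated in proportion to accumulated incident counts, with an inverse-propensity-style reweighting standing in for the corrective input change. The function name simulate, the report_frac parameter, and the specific correction are illustrative assumptions, not the paper's exact model or intervention.

```python
import random


def simulate(true_rates, steps=200_000, corrected=False, report_frac=0.0, seed=0):
    """Toy urn-style simulation of a predictive-policing feedback loop.

    Each step, one patrol is sent to a region with probability proportional
    to the accumulated incident counts; a crime is *discovered* there with
    probability equal to that region's true rate. Optionally, residents also
    *report* crimes from every region, independent of where patrols go.
    (Illustrative assumption, not the paper's exact model.)
    """
    rng = random.Random(seed)
    counts = [1.0] * len(true_rates)                # prior pseudo-counts
    for _ in range(steps):
        total = sum(counts)
        probs = [c / total for c in counts]
        # Allocate the single patrol according to the current model.
        region = rng.choices(range(len(true_rates)), weights=probs)[0]
        if rng.random() < true_rates[region]:       # discovered incident
            if corrected:
                # Inverse-propensity-style reweighting: discount the incident
                # by how likely the model was to send a patrol there at all.
                counts[region] += 1.0 / probs[region]
            else:
                counts[region] += 1.0               # naive update
        # Resident reports arrive regardless of patrol placement.
        for i, rate in enumerate(true_rates):
            if rng.random() < report_frac * rate:
                counts[i] += 1.0
    total = sum(counts)
    return [round(c / total, 3) for c in counts]


if __name__ == "__main__":
    rates = [0.10, 0.08]                            # true per-step crime rates
    print("naive, discovered only   :", simulate(rates))
    print("naive + resident reports :", simulate(rates, report_frac=0.5))
    print("corrected (reweighted)   :", simulate(rates, corrected=True))
```

In this toy model, the naive update concentrates nearly all patrols on the higher-rate region, mixing in resident reports attenuates the skew without removing it, and the reweighted update yields count shares close to the relative true crime rates, mirroring the qualitative claims above.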
