Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-making in Child Welfare Services

Algorithmic decision-making systems are increasingly being adopted by government public service agencies. Researchers, policy experts, and civil rights groups have all voiced concerns that such systems are being deployed without adequate consideration of potential harms, disparate impacts, and public accountability practices. Yet little is known about the concerns of those most likely to be affected by these systems. We report on a series of workshops conducted to learn about the concerns of affected communities in the context of child welfare services. The workshops involved 83 study participants, including families involved in the child welfare system, employees of child welfare agencies, and service providers. Our findings indicate that general distrust in the existing system contributes significantly to low comfort with algorithmic decision-making. We identify opportunities to improve comfort through greater transparency and better communication. We discuss the implications of our findings for the design of accountable algorithmic systems in child welfare applications.
