Concrete Safety for ML Problems: System Safety for ML Development and Assessment

Many stakeholders struggle to place reliance on ML-driven systems because of the harm these systems may cause. Concerns about trustworthiness, unintended social harms, and unacceptable social and ethical violations undermine the promise of ML advancements. Moreover, such risks in complex ML-driven systems present a special challenge, as they are often difficult to foresee, arising over time, across populations, and at scale. These risks often do not stem directly from poor ML development decisions or low performance; rather, they emerge through the interactions among ML development choices, the context of model use, environmental factors, and the effects of a model on its target. Systems safety engineering is an established discipline with a proven track record of identifying and managing risks even in high-complexity sociotechnical systems. In this work, we apply a state-of-the-art systems safety approach to concrete applications of ML with notable social and ethical risks, demonstrating a systematic means of meeting the assurance requirements needed to argue for safe and trustworthy ML in sociotechnical systems.
