From Blade Runners to Tin Kickers: what the governance of artificial intelligence safety needs to learn from air crash investigators

What should we do when artificial intelligence (AI) goes wrong? AI has huge potential to improve the safety of societally critical systems, such as healthcare and transport, but it also has the potential to introduce new risks and amplify existing ones. For instance, biases in widely deployed diagnostic AI systems could adversely affect the care of large numbers of patients (Fraser et al. 2018), and hidden weaknesses in the perception systems of autonomous vehicles may regularly expose road users to significant risk (NTSB 2019). What are the most appropriate strategies for governing the safety of AI-based systems? One answer emerges from looking in two contrasting directions: forwards, to our imagined dystopian AI future, and backwards, to the progressive evolution of aviation safety.

[1] Carl Macrae. Governing the safety of artificial intelligence in healthcare. BMJ Quality & Safety, 2019.

[2] Anna Jobin, Marcello Ienca, and Effy Vayena. The global landscape of AI ethics guidelines. Nature Machine Intelligence, 2019.

[3] Carl Macrae. Learning from the Failure of Autonomous and Intelligent Systems: Accidents, Safety, and Sociotechnical Sources of Risk. Risk Analysis, 2021.

[4] Alan F. T. Winfield and Marina Jirotka. Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2018.

[5] Hamish Fraser, Enrico Coiera, and David Wong. Safety of patient-facing digital symptom checkers. The Lancet, 2018.

[6] Madeleine Clare Elish. Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction. Engaging Science, Technology, and Society, 2019.

[7] Carl Macrae. Close Calls: Managing Risk and Resilience in Airline Flight Safety. 2014.

[8] Graham Braithwaite et al. What do aircraft accident investigators do and what makes them good at it? Developing a competency framework for investigators using grounded theory. 2018.

[9] Alan F. T. Winfield and Marina Jirotka. The Case for an Ethical Black Box. TAROS, 2017.

[10] Carl Macrae and Charles Vincent. Learning from failure: the need for independent safety investigation in healthcare. Journal of the Royal Society of Medicine, 2014.

[11] Patrick Waterson, Daniel P. Jenkins, et al. 'Remixing Rasmussen': The evolution of Accimaps within systemic accident analysis. Applied Ergonomics, 2017.

[12] Carl Macrae and Charles Vincent. A new national safety investigator for healthcare: the road ahead. Journal of the Royal Society of Medicine, 2017.

[13] Alan F. T. Winfield et al. Robot Accident Investigation: a case study in Responsible Robotics. In Software Engineering for Robotics, 2020.

[14] Stephen Cave and Kanta Dihal. Hopes and fears for intelligent machines in fiction and reality. Nature Machine Intelligence, 2019.