Explainable Artificial Intelligence for Safe Intraoperative Decision Support.

What Is the Innovation?

Intraoperative adverse events are a common and important cause of surgical morbidity.1,2 Strategies to reduce adverse events and mitigate their consequences have traditionally focused on surgical education, structured communication, and adverse event management. However, until now, little could be done to anticipate these events in the operating room. Advances in both intraoperative data capture and explainable artificial intelligence (XAI) techniques to process these data open the way for real-time clinical decision support tools that can help surgical teams anticipate, understand, and prevent intraoperative events. In a systematic review, 64% of studies reported improvements in clinical decisions with automated decision support, especially when suggestions were provided concurrently with the task.3 Machine learning (ML) techniques can provide this real-time decision support, estimating risk automatically from patient and intraoperative data.

However, there has been hesitation to adopt ML techniques in health care4 because these systems can produce rare but catastrophically incorrect predictions, and because high accuracy can be achieved in unexpected ways, such as by recognizing patterns in the manner of data recording rather than in the content of the data themselves. Explainable artificial intelligence is a collection of algorithms that improve on traditional ML techniques by providing the evidence behind predictions. For example, while a traditional ML algorithm in radiology may predict that an image contains evidence of cancer, an XAI system will indicate what and where that evidence is (eg, a 3-cm right lower lobe nodule).

In 2018, Lundberg et al5 developed an XAI-based warning system called Prescience that predicts hypoxemia during surgical procedures up to 5 minutes before it occurs. The system monitors vital signs and provides the clinician with a risk score that updates in real time, along with the reasons for its predictions, listing risk factors such as vital sign abnormalities and patient comorbidities. It can act like an additional vital sign, warning the anesthetist in real time about upcoming risk.

With XAI, surgeons can receive similar warnings about upcoming intraoperative events to augment their clinical judgment, helping them avoid complications. Our team is currently working on surgical XAI that uses laparoscopic video to warn surgeons about upcoming bleeding events in the operating room and to explain this risk in terms of patient and surgical factors. By anticipating and avoiding adverse events, surgical teams may be able to reduce operative times and improve outcomes for patients.
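For readers curious about how such per-prediction explanations are produced, the sketch below illustrates the general idea using the open-source shap library, whose lead author also developed Prescience.5 It is a minimal illustration under stated assumptions: the model, feature names, and data are synthetic placeholders, not the actual Prescience implementation or its inputs.

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-ins for intraoperative snapshots: each row holds a few
# hypothetical features, and each label marks whether hypoxemia occurred
# within the next 5 minutes.
feature_names = ["SpO2", "heart_rate", "tidal_volume", "BMI"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] < -0.5).astype(int)  # toy rule: low SpO2 drives the label

model = GradientBoostingClassifier().fit(X, y)

# SHAP assigns each feature an additive contribution to this particular
# prediction, relative to the model's baseline output.
explainer = shap.TreeExplainer(model)
snapshot = X[:1]  # the "current" patient snapshot
risk = model.predict_proba(snapshot)[0, 1]
contributions = explainer.shap_values(snapshot)[0]

print(f"Predicted hypoxemia risk: {risk:.2f}")
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
    # Ranked risk factors, analogous to the explanations Prescience shows
    print(f"  {name}: {value:+.3f}")

In a deployed system, the risk score and ranked contributions would be recomputed on each new snapshot of patient data, yielding the continuously updating warning and explanation described above.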