What Is the Innovation?

Intraoperative adverse events are a common and important cause of surgical morbidity.1,2 Strategies to reduce adverse events and mitigate their consequences have traditionally focused on surgical education, structured communication, and adverse event management. However, until now, little could be done to anticipate these events in the operating room. Advances in both data capture in the operating room and explainable artificial intelligence (XAI) techniques to process these data open the way for real-time clinical decision support tools that can help surgical teams anticipate, understand, and prevent intraoperative events.

In a systematic review, 64% of studies reported improvements in clinical decisions with automated decision support, especially when suggestions were provided at the same time as the task.3 Machine learning (ML) techniques can provide this real-time decision support, estimating risk automatically from patient and intraoperative data. However, there has been hesitation to adopt ML techniques in health care4 because these systems can make rare, catastrophically incorrect predictions, and high accuracies can be achieved in unexpected ways, such as by recognizing patterns in the manner of data recording rather than in the content of the data themselves.

Explainable artificial intelligence is a collection of algorithms that improve on traditional ML techniques by providing the evidence behind predictions. For example, while a traditional ML algorithm in radiology may predict that an image contains evidence of cancer, an XAI system will indicate what and where that evidence is (eg, 3 cm, right lower lobe nodule). In 2018, Lundberg et al5 developed an XAI-based warning system called Prescience that predicts hypoxemia during surgical procedures up to 5 minutes before it occurs. This system monitors vital signs and provides the clinician with a risk score that updates in real time.
It also continuously updates the clinician with the reasons for its predictions, listing risk factors such as vital sign abnormalities and patient comorbidities. This can act like an additional vital sign, regularly updating information to warn the anesthetist in real time about upcoming risk. With XAI, surgeons can receive similar warnings about upcoming intraoperative events to augment their clinical judgment, helping them avoid complications. Our team is currently working on surgical XAI that uses laparoscopic video to warn surgeons about upcoming bleeding events in the operating room and to explain this risk in terms of patient and surgical factors. By anticipating and avoiding adverse events, surgical teams may be able to reduce operative times and improve outcomes for patients.
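To make the idea concrete: an XAI warning system of the kind described above pairs each risk score with per-feature attributions that tell the clinician why the score rose. The sketch below is a minimal, hypothetical illustration only; the feature names, weights, and baselines are invented, and it uses a linear model (for which weight × deviation from baseline is an exact additive attribution) rather than the gradient-boosted trees and SHAP values used in the actual Prescience system.

```python
import math

# Hypothetical logistic risk model for illustration only.
# Weights, baselines, and the intercept are invented, not clinical values.
WEIGHTS = {"spo2": -0.30, "heart_rate": 0.04, "bmi": 0.05}
BASELINE = {"spo2": 97.0, "heart_rate": 75.0, "bmi": 25.0}
INTERCEPT = -3.0


def risk_with_explanation(vitals):
    """Return (risk, contributions).

    contributions maps each feature to its log-odds contribution relative
    to a baseline patient -- for a linear model this decomposition is
    exact and additive, analogous in spirit to SHAP attributions.
    """
    contributions = {
        name: WEIGHTS[name] * (vitals[name] - BASELINE[name])
        for name in WEIGHTS
    }
    logit = INTERCEPT + sum(contributions.values())
    risk = 1.0 / (1.0 + math.exp(-logit))
    return risk, contributions


# A deteriorating (hypothetical) patient: low SpO2, tachycardia.
risk, why = risk_with_explanation(
    {"spo2": 89.0, "heart_rate": 110.0, "bmi": 31.0}
)
# The largest contribution identifies the dominant risk factor to display.
top_factor = max(why, key=lambda k: why[k])
```

In this example the falling oxygen saturation contributes most to the log-odds, so the alert shown to the clinician would name SpO2 as the leading reason for the rising score, which is the behavior the prose above describes.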
[1] Rowan K, et al. Risk stratification tools for predicting morbidity and mortality in adult patients undergoing major surgery: qualitative systematic review. Anesthesiology. 2013.
[2] Jüni P, et al. First-year analysis of the Operating Room Black Box Study. Annals of Surgery. 2018.
[3] McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005.
[4] Grantcharov TP, et al. Using data to enhance performance and improve quality and safety in surgery. JAMA Surgery. 2017.
[5] Lundberg SM, et al. Explainable machine-learning predictions for the prevention of hypoxaemia during surgery. Nature Biomedical Engineering. 2018.
[6] Cabitza F, et al. Unintended consequences of machine learning in medicine. JAMA. 2017.
[7] Sandberg WS, et al. The incidence of hypoxemia during surgery: evidence from two institutions. Canadian Journal of Anaesthesia. 2010.