Understanding and Utilizing Medical Artificial Intelligence

Medical artificial intelligence is cost-effective, scalable, and often outperforms human providers. Its adoption by patients is critical for providing affordable, high-quality healthcare. One important barrier to its adoption is the perception that algorithms are a "black box": people do not subjectively understand how algorithms make medical decisions, and we find that this impairs their utilization. We argue that a second, less obvious part of the problem is that people also overestimate their objective understanding of medical decisions made by human healthcare providers. In four pre-registered experiments with convenience and nationally representative samples (N = 2,296), we find that people exhibit such an illusory understanding of human medical decision making. This illusion leads people to claim greater understanding of decisions made by human than by algorithmic healthcare providers, which in turn makes them more reluctant to utilize algorithmic providers. Fortunately, even brief interventions can reduce this illusory gap in subjective understanding by shattering the illusion of understanding for human medical decision making. Moreover, interventions can also increase subjective understanding of algorithmic decision processes, which increases willingness to utilize algorithmic healthcare providers at no expense to the utilization of human providers. Our results suggest that proposed (German) regulations requiring explanations of medical decisions made by algorithms could increase patient utilization of algorithmic providers; they also identify a new source of algorithm aversion and indicate that illusions of understanding loom largest for the kind of causal system most similar to ourselves: other people.