COMMENTARY
Beware of Biased AI
F. Perry Wilson, MD, MSCE
December 19, 2023
This transcript has been edited for clarity.
Okay. You're in the emergency room, evaluating a patient who comes in with acute shortness of breath. It could be pneumonia, it could be a COPD exacerbation, it could be heart failure. You look at the x-ray to help make your diagnosis — let's say COPD — and then, before you start ordering the appropriate treatment, you see a pop-up in the electronic health record, a friendly AI assistant that says something like, "I'm pretty sure this is heart failure."
What do you do?
This scenario is closer than you think. In fact, scenarios like this are already happening in health systems around the country, sometimes in pilot programs, sometimes with more full-fledged integration. But the point remains: At some point, clinicians' diagnoses are going to be "aided" by AI.
What's the problem with AI predictions? Well, people often complain that AI is a "black box": sure, it may tell me it thinks the diagnosis is heart failure, but I don't know why it thinks that. To make AI work well with clinicians, it needs to explain itself.
But a new study suggests that the "explainability" of AI predictions doesn't make much difference in how doctors use them.
Medscape © 2023 WebMD, LLC
Any views expressed above are the author's own and do not necessarily reflect the views of WebMD or Medscape.
Cite this: Beware of Biased AI - Medscape - Dec 19, 2023.
Authors and Disclosures
Author
F. Perry Wilson, MD, MSCE
Associate Professor, Department of Medicine, Yale School of Medicine; Interim Director, Program of Applied Translational Research, Yale School of Medicine, New Haven, Connecticut
Disclosure: F. Perry Wilson, MD, MSCE, has disclosed no relevant financial relationships.