Podcast | Artificial Intelligence | June 03, 2019

Making AI Safe, Effective and Humane For Imaging

Artificial Intelligence In It For The Long Haul, Says Radiology Leader

Editor’s note: This podcast is the first in a content series by Greg Freiherr covering the Society for Imaging Informatics in Medicine (SIIM) conference in June.

The use of artificial intelligence (AI) is not going away, according to Charles E. Kahn, Jr., M.D., M.S., professor and vice chair of radiology at the University of Pennsylvania Perelman School of Medicine in Philadelphia. It has, in fact, been widely used in medical imaging for decades and can be seen today not only in computer-aided detection systems in mammography but in the speech recognition systems that radiologists depend on daily to report their findings.

As radiologists learn more about what AI can — and cannot — do, they are coming to embrace this technology, Kahn says in a podcast published by Imaging Technology News (ITN). And for good reason. “In many ways AI will be a tool that will help us practice more effectively,” he said in the podcast. “That will reduce some of the cognitive workload; will reduce some of the tedium of dealing with medical images.”

But Kahn noted in the podcast that to be widely accepted, AI must overcome certain obstacles. One is trust; another is effectiveness. The trick to achieving both, he says, is to show “that an algorithm that you built in your setting will actually work in my setting.”

AI-infused algorithms must not only work in different clinical settings, he said in the podcast, but on equipment made by different manufacturers, and on differently formatted data. “We know that if you develop an algorithm that works on manufacturer A’s equipment, when you move to manufacturer B’s — or when you change the reconstruction kernel of a CT scan or use a different slice thickness with the scans — all of a sudden, the tool that worked beautifully before may not work suitably.”

Radiologists must shoulder the responsibility for ensuring the safety and effectiveness of AI systems designed for medical imaging, Kahn said: “It is incumbent on all of us as radiologists, when we implement these systems, that we test them rigorously to make sure they work.”

In the podcast, Kahn elaborated on the need for these algorithms to be “humane” — one of three topics (the others being safety and effectiveness) he will address Friday, June 28, from 3 to 5 p.m. as part of his 2019 Dwyer Lecture at the annual meeting of the Society for Imaging Informatics in Medicine (SIIM).

Kahn, who has authored more than 110 scientific publications and edits RSNA’s online journal about AI in radiology, says that to be humane, AI must improve “the care of our patients” while preserving the “dignity, beneficence and autonomy” of physicians.
Greg Freiherr is a contributing editor to ITN. Over the past three decades, he has served as business and technology editor for publications in medical imaging, as well as consulted for vendors, professional organizations, academia, and financial institutions.
Related content:

PODCAST: Is Artificial Intelligence The Doom of Radiology? 

PODCAST: Radiologists Must Understand AI To Know If It Is Wrong 

Technology Report: Artificial Intelligence 

PODCAST: How to Make Artificial Intelligence a Success in Medicine