Greg Freiherr, Industry Consultant

Greg Freiherr has reported on developments in radiology since 1983. He runs the consulting service, The Freiherr Group.

Blog | Greg Freiherr, Industry Consultant | Artificial Intelligence | April 18, 2018

Why We Have to Pay Attention to AI Right Now


Image courtesy of Pixabay

Where will artificial intelligence (AI) be in a year? Five years? A decade? A century?

The snapshot of AI we are viewing now shows only where AI stands at present. If we want AI to reach its potential for helping people, we have to look ahead to where AI will be and adjust what we are doing now.

In imaging we are building digital savants. Among the examples on the exhibit floor of RSNA 2017: GE’s algorithm, embedded on portable X-ray machines to spot conditions such as pneumothorax; and Siemens’ AI-fueled system for optimally positioning patients in high-end CTs. More recently, HeartSmartIMT Plus came to light, its cloud-based algorithms designed to help cardiologists perform echocardiograms on patients in their offices, then analyze the images in the cloud.

These are just a few of the many algorithms being groomed for imaging. And they are only a small slice of the ones being developed for all of healthcare.

AI Tools

At the Healthcare Information and Management Systems Society (HIMSS) 2018 conference, multiple presenters described smart algorithms as tools. That’s all they are, said one presenter after the other. Whether these algorithms are looking for patterns in clinical images or in petabytes of population health data, and regardless of whether they rely on supervised or unsupervised learning, smart algorithms are just tools.

But they learn. And that makes an enormous difference.

These tools are not being asked to judge their tasks as good or bad, or their findings as things that may help or harm a patient. They are simply being coded to stay within the scope of their tasks. They find things. By design, everything they learn is applied to one task area. They are assistants, limited to specific tasks, focusing on individual problems and accessing highly selective datasets.

But will AI developers be able to put blinders on smart algorithms forever? Will algorithms always be digital beasts of burden? Already some of the AI tools envisioned for near- or mid-term application are being groomed to examine images so they can “assess” the value of radiology reports. For example, if an algorithm spots an aneurysm in patient images and that aneurysm is not mentioned in the radiology report, the algorithm might flag the report as “incomplete,” pending review by the radiologist.
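To make the idea concrete, here is a minimal, purely hypothetical sketch of such a completeness check. The function, the detector output and the report text are all illustrative stand-ins, not any vendor's actual API; a real system would use far more sophisticated image analysis and natural-language processing.

```python
# Hypothetical sketch: flag a radiology report as incomplete when a
# finding detected in the images is never mentioned in the report text.

def flag_incomplete(report_text: str, detected_findings: list[str]) -> list[str]:
    """Return the detected findings that the report does not mention."""
    text = report_text.lower()
    return [f for f in detected_findings if f.lower() not in text]

# Illustrative inputs: the findings list stands in for the output of an
# image-analysis model; the report is a one-line stand-in for dictation.
report = "Impression: No acute intracranial hemorrhage."
findings = ["hemorrhage", "aneurysm"]

missing = flag_incomplete(report, findings)
if missing:
    print(f"Report flagged as incomplete, pending radiologist review: {missing}")
```

The point of the sketch is the workflow, not the matching logic: the algorithm does not decide anything on its own; it only queues the report for a human to review.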

Smart algorithms may also be tasked with interpreting radiology reports for patients who access the reports and images through portals. Patient engagement is huge and is gathering momentum. To further that engagement, algorithms might be asked to explain findings in language understandable to the patient. To do so would require the algorithm to get to know each patient and tailor responses accordingly.

The simple truth is that we don’t know for sure that we will be able to control what these algorithms will become. Similarly, we cannot know whether we will be able to control the speed at which smart algorithms evolve.

We have some baselines, if we consider our own development. But the applicability is suspect. AI will not be constrained by physical or biological form. And it will be able to learn at unprecedented speeds.

But of greatest concern is that focusing on just one task forces the algorithm into a kind of tunnel vision that can distort its decision-making. Recently MIT grad student Joy Buolamwini found that a basic type of facial analysis software did not detect her face. Why? Because the coders hadn’t written the algorithm to identify dark skin tones and certain facial structures associated with black people.

It was an all-too-easy oversight. In digital photography, the first color images were calibrated against white. Little wonder that more advanced coding could be similarly colorblind.

Recognizing that smart algorithms are playing increasingly high-profile roles, Buolamwini says she is “on a mission to stop an unseen force that’s rising,” a force she describes as algorithmic bias. Comparing algorithms to viruses in a TED Talk, she opines that algorithms “can spread bias on a massive scale at a rapid pace.”

How biases might creep into the algorithms being written to analyze radiology data is impossible to say, just as it is impossible to say what effect these biases might have. There may be, however, a simple solution.

Human Governors

Tying AI to human intelligence could serve as a governor, of sorts. For this governor to operate effectively, people must be in the loop when decisions are made. Because workflow will depend on the speed or actions of the human, the algorithms will not be able to go beyond the control of people. This is the comforting implication behind building algorithms that serve as human assistants.

But what if the human in the loop is incompetent or too intellectually lazy to question the conclusions of the algorithm?  And what about algorithms designed to interact with patients? Who will perform quality control in these instances?

Even scarier, a time may come or a circumstance may arise when a person is not directly in the loop.

But again there is a solution. Make the governor an inherent dedication to the patient. How about building healthcare algorithms that put patients first?

In his science fiction, Isaac Asimov described four laws that were intended to keep smart robots from hurting humanity. Time and again, however, the unforeseen mucked things up. An interesting potential: robots that unknowingly breach the laws because information is kept from them.

We may soon be using algorithms to assess medical reports; to identify weaknesses in human interpretations of medical data; to check whether follow-up tests recommended by a radiologist are done. In each instance, the algorithm will have been trained and then provided with highly selected, limited data sets. Limited data can not only impair an algorithm's effectiveness; it can also impose biases.
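The follow-up check in particular is mechanically simple, which is what makes it an attractive near-term use. A hypothetical sketch, with record structures invented for illustration rather than drawn from any real RIS or EHR system:

```python
# Hypothetical sketch: check whether a follow-up exam recommended in a
# radiology report was actually performed by its due date. The record
# format is illustrative, not from any real system.
from datetime import date

def followup_completed(recommended_exam: str, due: date,
                       completed_exams: list[tuple[str, date]]) -> bool:
    """True if an exam matching the recommendation was done on or before `due`."""
    return any(name == recommended_exam and performed <= due
               for name, performed in completed_exams)

# Illustrative patient history: one completed exam.
exams = [("chest CT", date(2018, 3, 1))]

print(followup_completed("chest CT", date(2018, 6, 1), exams))          # True
print(followup_completed("renal ultrasound", date(2018, 6, 1), exams))  # False
```

Even a check this simple inherits the limits of its data: if the completed-exam feed omits studies done at an outside facility, the algorithm will wrongly flag compliant patients, which is exactly the kind of bias the limited data sets described above can impose.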

Should we take the opportunity to teach machines, for example, why follow-up tests recommended by a radiologist are important, rather than to simply spot whether they were done? Should we be teaching algorithms to look out for patient welfare?

Patient centrism could serve as a governor. It would be more effective and more practical than making sure every smart algorithm has a human in the loop — and that human is competent.

Like Asimov’s Laws, writing algorithms that put the “patient first” could be a cornerstone in the evolution of healthcare algorithms.
