Radiology Faces Frightening New World
A Roman walks into a bar, holds up two fingers and says, “Five beers please.”
—One of the world’s geekiest jokes
Things change. Sometimes fundamentally. As in the switch from Roman to Arabic numerals and more recently the embrace of base-two. CT? MRI? DR? Without binary math? Fuhgettaboutit!
Today change is not only accelerating … it’s taking us in new directions. Deep learning, a branch of artificial intelligence, will shake up radiology like never before.
Think you understand computer-aided diagnostics? Think again. Machines are.
In August, IBM bought RIS/PACS vendor Merge Healthcare for $1 billion. Why? Largely for Merge’s access to billions of medical images available through its 7,500 customers. IBM figures these images could provide the critical mass that its Watson Health needs to learn to identify disease as well as, or better than, its predecessor handled contestants on Jeopardy.
And this is not an isolated thought. Software engineers from Enlitic, a San Francisco-based start-up, will soon install a deep-learning algorithm of their own design on PACS at 80 medical imaging centers in Australia and Asia. The algorithm will examine archived medical images until it has learned to identify disease. It will then work with radiologists as part of a diagnostic team, in exactly the opposite way that radiologists today work with CAD. Working with Enlitic’s algorithm, radiologists will select the areas of interest, and the machine will make the interpretation, subject, of course, to review by radiologists.
If you think this sounds a bit like machines are taking over, read on.
Deep learning algorithms will provide the ultimate clinical decision support, Enlitic founder Jeremy Howard tells me. Tapping into patient and lab information in electronic medical record systems will give these algorithms all the information they need to “advise” physicians about every aspect of patient management from diagnosis to therapy to follow-up.
Machines will connect the dots; draw conclusions; even see what hasn’t been seen before. It’s already happening. A DL algorithm developed at Stanford University that learned to read pathology images has found evidence that the cells surrounding cancer cells can help determine patient prognoses. That was news to pathologists, who traditionally have looked only at diseased cells. More surprises are on the way.
When computer-driven diagnostics really takes hold, machines will have mastered the core capabilities that distinguish 80 percent of the workforce in Western nations — reading and writing, speaking and listening, perception, and integrating knowledge.
If radiologists think competition for primary reads and over-reads is tough now, it’s going to get tougher. And they won’t have to wait long.
The signs of a radically different world, birthed by machines that learn, are everywhere. Driverless cars are cruising California streets, distinguishing trees from pedestrians, waiting an extra moment at intersections because analyses show many accidents happen 1.5 seconds after the light turns green.
In an instant, deep learning can translate spoken language to English text to Mandarin text to spoken Mandarin — in a voice indistinguishable from that of the speaker. Amazon suggests new purchases based on ones bought; Google fills in searches based on what we mean to type, as well as what we do.
Enlitic’s learning algorithm could be working in a matter of months, learning and then identifying signs of disease in MR, CT, ultrasound and X-ray images. Five years from now this and other DL-based software could be in use around the world. It won’t be long before their developers ask, “Why use telepresence to bring experts virtually to patients, when deep learning algorithms can be on-site serving as virtual experts?”
Developing nations will be among the first to buy into this argument, because they are in the direst need of what learning machines can do. DL algorithms might offset the dearth of medical expertise that experts say is holding back the improvement of public health in these countries. If deep learning proves itself under such demanding conditions, will the increasingly cost-conscious healthcare systems of Western nations be far behind?
We may take solace in the fact that the world has weathered technological revolutions before. But the one now approaching will be unlike any ever seen. Whereas human performance is increasing gradually, the performance of deep learning is growing exponentially.
It’s not a brave new world we are facing. It is a world of unprecedented opportunity and extraordinary peril. And it’s filling our side-view mirror … the one that says objects may be closer than they appear.
Editor’s note: This is the first blog in a series of four by industry consultant Greg Freiherr on The Evolution of Radiology.