EchoPixel allows radiologists to see CT, MRI and ultrasound scans in 3-D.
Medical images play a significant role in patient diagnostics across different points of the care cycle, including routine screening, diagnosis verification, perioperative planning and follow-up checkups. Accordingly, scanning equipment vendors and clinical stakeholders invest effort in improving image quality across modalities, accelerating automated processing algorithms, and even enabling preliminary analysis and segmentation to facilitate accurate scan interpretation and avoid costly medical errors.
Nevertheless, interpretation errors still happen. Due to the complexity of the radiology field itself, a specialist may fail to identify an abnormality on an image, classify it incorrectly or misreport the findings. For example, one study dedicated to radiology practice errors estimates a daily rate of cognitive and perceptual errors of 3 to 5 percent across different modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound and X-ray.
While it is impossible to eliminate misinterpretation entirely, there may be a way to reduce its occurrence with the help of technology, namely augmented reality (AR). It can be used to educate and train radiologists, improving their expertise and skills by letting them review a wider variety of medical cases and refresh their knowledge of human anatomy variations.
Why Augmented Reality
Unlike virtual reality, which blocks out the user’s real-world surroundings, AR enhances them by superimposing digital images on the user’s view of their environment.
Augmented reality is already harnessed in clinical settings for medical training, intervention guidance and patient education, and it is attracting extensive investment. The global healthcare AR market is expected to reach $1.32 billion by 2023 and features a range of players from startups to more mature companies, including AccuVein, Orca Health, Iflexion, Brain Power, Augmedix and more.
AR-enhanced radiology training is already commercially available and studied across different modalities. We reviewed some of these efforts to evaluate the technology’s potential to help radiologists and pathologists fine-tune their image interpretation skills. This article features two notable cases we encountered, one commercial and one from research.
Multimodality 3-D System
California-based EchoPixel points to the limitations of current patient anatomy visualization approaches, which force pathologists and radiologists to solve three-dimensional problems by analyzing two-dimensional images. Medical specialists usually have to process CT, MRI and other scans in their minds, mentally integrating a series of 2-D images into a 3-D picture of the inspected organ, vessel or tissue and the surrounding anatomy. This gap between the actual and the perceived image may hinder diagnosis verification or even lead to image misinterpretation.
The company addresses this 2-D interpretation problem with its proprietary True 3D system. The solution works with HP Zvr displays, which use four cameras to track the user’s head motions, together with 3-D glasses, to transform patient-specific medical images into interactive three-dimensional AR models. Radiologists and other health specialists can view medical scans with organs, tissues, vessels and suspected abnormalities in 3-D, interact with them in real time, and even dissect selected objects with a stylus.
The system creates an unprecedented opportunity for a wide range of medical specialists, facilitating diagnostics, surgical planning and medical education. Radiologists and pathologists can use True 3D to amplify their expertise, gain deeper human anatomy understanding, refresh knowledge and review questionable cases from a different perspective. For example, the system can be particularly useful for tumor response quantification and evaluation after the chemotherapy course across many oncology types and stages.
Currently, Penn State Health and Texas A&M Health Science Center use the True 3D system in research and medical training.
Research: Mammographic Interpretation Training Study
According to a 2018 study, conventional radiology training still lacks technological support. In particular, the authors report that mammogram interpretation training depends heavily on manual segmentation and transcription. With this approach, radiologists and pathologists in training may not receive real-time feedback on their progress, and thus miss some of the intricacies of the interpretation process that a more advanced training method could surface.
The researchers explore the hypothesis that augmented reality can power an automated training system that complements current screening education techniques. They propose a system with a cyclic data flow, stylus support for precise interaction with the image, and a virtual menu for easy navigation.
One of its key advantages is that such a system would be independent of a workstation, introducing convenience and mobility into screening education. A trainee wouldn’t have to switch between stations to view an image in one location and record annotations in another, as is usually the case. The system can superimpose scans onto different surfaces, offering an uninterrupted learning process.
Moreover, the proposed augmented reality training solution tracks training progress by recording and saving completed transcriptions and annotated markers, then processes the results for further training or post-training analysis.
Since the system would collect and recognize each operation made by the trainee, the individual features of abnormalities, such as breast carcinomas, could be analyzed separately and automatically. Apart from improving screening education, this ability could also accelerate research and development in clinical trials for oncology treatment or enable more precise evaluation of post-treatment condition.
With its real-time feedback feature, the proposed AR system would allow both novice and experienced health specialists to improve their image interpretation performance. And if it can accumulate and continuously analyze training data, the system could further be used in clinical trials and decision-making.
AR Ready to Revolutionize Radiology Training
Though not yet widely adopted, augmented reality holds great potential to revolutionize medical education for radiologists and pathologists, and it is already gaining appreciation in scientific and clinical communities.
Researchers and commercial users believe the technology can accelerate the learning of novice health specialists as well as deepen the expertise of more seasoned professionals, ultimately reducing medical image misinterpretation. We support this claim and anticipate AR becoming standard practice for medical image interpretation training in the near future.
Tatyana Shavel is a VR/AR technology analyst at Iflexion. She works at the intersection of business and technology, exploring the practical use of augmented and virtual reality for smarter business and a better world. In addition to keeping a finger on the pulse of industry trends, she enjoys digging into data and conducting research.