Advances in healthcare practice and technology are largely driven by research, and in recent years perhaps no discipline has been more influential than informatics — the study and practice of creating, storing, finding, sharing and manipulating information. More efficient storage and sharing of medical data are crucial in the transition to value-based healthcare, and radiology plays a central role in these efforts. Several major studies published in 2016 examined the role of informatics technology in medical imaging, and many of those studies were highlighted in a session at the 102nd annual meeting of the Radiological Society of North America (RSNA), Nov. 27-Dec. 2, 2016, in Chicago.
The session was conducted by William Hsu, Ph.D., assistant professor in residence in the Department of Radiological Sciences at the University of California, Los Angeles (UCLA), and Charles E. Kahn, Jr., M.D., M.S., professor and vice chairman of radiology at the Perelman School of Medicine at the University of Pennsylvania.
The presented research highlighted four primary roles informatics can play in a radiology setting:
• Interoperability and communication
• Natural language processing
• Radiomics and radiogenomics
• Machine learning
Interoperability and Communication
Naturally, improved storage and sharing of information can greatly improve communication, both within the healthcare team and between providers and their patients. For providers, this means better cooperation and coordination, which for patients translates into better care.
One way that improved coordination can manifest is through better standardization of care. A study published in the December 2016 Journal of Vascular and Interventional Radiology looked at physician response to newly created standardized reporting templates for five interventional radiology (IR) procedures. The templates were distributed to 20 IR practices, and the researchers evaluated template adoption (via random sampling) and physician satisfaction (via survey) after one year. Ten institutions implemented the new templates, with a total mean report usage of 57 percent. Each practice modified the templates according to its own preferences, editing for length, wordiness and/or the number of required fill-in fields. The researchers found a significant positive correlation between template adoption and a reduced number of fill-in fields: templates requiring fewer fields were adopted more often.1
Improving the readability and shareability of radiology results offers radiologists a great opportunity to enhance their value to other members of the healthcare team — an idea supported by recent research. A study published online last May by the journal Radiology asked 30 primary care providers (PCPs) how they follow up on incidental imaging findings from radiology reports. Some providers shared that while they felt compelled to do follow-up, they found the process frustrating when the radiology report contained no explicit recommendations.2
While results like these are largely qualitative, informatics technology and the improved coordination it brings can have a quantifiable positive impact on patient care. A study from the August 2016 Journal of the American Medical Informatics Association found that quicker access to patient data via health information exchange (HIE) shortened time to treatment in the emergency department of a large academic medical center. For each one-hour reduction in access time, visit length was 52.9 minutes shorter, the likelihood of imaging was lower (by up to 2.5 percent, depending on modality), the likelihood of admission was reduced 2.4 percent and average charges were $1,187 lower.3
Thanks to patient portals, electronic medical records and other similar technologies, patients are seeing similar benefits from increased contact with radiologists. Such systems are still developing and adoption among providers is still taking hold, but research suggests patients do find value in them. A cross-sectional study published in the June 2016 issue of Academic Radiology looked at over 129,000 patients who had online portal access in a large health system in 2014. The primary point of interest was how many patients viewed their radiology reports, lab results and clinical notes. Of the 61,131 patients with at least one radiology report available, 31,308 (51.2 percent) viewed them. Not surprisingly, patients who also viewed lab results or clinical notes were significantly more likely to view their radiology reports.4
The researchers also collected socio-demographic data to assess factors that might influence report viewing. Women, patients 25-39 years old and English speakers were the most likely to access their results online (all three groups saw above 50 percent usage). Traditionally underserved populations, including Medicare patients and African-Americans, were among the least likely to access online reports.4
Machine Learning
At the cutting edge of informatics is deep learning, a term heard at numerous sessions and vendor booths throughout RSNA 2016. Deep learning (along with related concepts like artificial intelligence and machine learning) involves teaching a computer to recognize patterns, building up a reference library until the computer can identify and predict patterns on its own. Applied to radiology, deep learning research has focused on building algorithms that review imaging cases so the computer can eventually recognize various disease states and diagnoses by itself.
This type of computer-aided diagnosis was tested in several recent studies. One retrospective study by a team from the University of California-Irvine tested an automated system for detection and anatomic localization of traumatic thoracic and lumbar vertebral body fractures in computed tomography (CT) scans. The study set consisted of exams from 104 patients — 94 were positive for fractures and the remaining 10 were controls — with 141 total fractures identified. Locations were marked and classified by a radiologist, and then the images were divided into training and testing subsets. The system detected 28 of 34 findings in the training set, and 87 of 107 findings in the testing set. False-positive rates were 2.5 and 2.7 findings per patient, respectively.5
Teaching a computer to reach this level of diagnostic certainty is no small feat: A significant number of imaging cases must be reviewed to build a reference library from which the machine can draw accurate conclusions. What constitutes a significant number, however, is still being determined, and it can vary from one algorithm to another. A study begun in 2015 at Massachusetts General Hospital (MGH) and Harvard Medical School seeks to determine how much data is needed to train such a system to the necessary high accuracy. The research team initially tasked a convolutional neural network (CNN), a type of machine learning system, with classifying axial CT images into six anatomical classes, using training datasets of six different sizes (5, 10, 20, 50, 100 and 200 images). The system will then be tested on 6,000 CT images from the MGH picture archiving and communication system (PACS) to gauge its accuracy.6
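The core of the MGH experiment — vary the training-set size, hold the test set fixed, and watch accuracy change — can be sketched with a toy stand-in. The sketch below substitutes a trivial nearest-centroid classifier on synthetic feature vectors for the study's CNN and real CT images; the six "classes," feature dimensions and sample counts are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Six synthetic "anatomical classes": each class is a Gaussian blob in a
# 32-dimensional feature space, standing in for real CT image content.
N_CLASSES = 6
centers = rng.normal(size=(N_CLASSES, 32)) * 3.0

def sample(n_per_class):
    """Draw n_per_class noisy examples from each class."""
    X = np.vstack([c + rng.normal(size=(n_per_class, 32)) for c in centers])
    y = np.repeat(np.arange(N_CLASSES), n_per_class)
    return X, y

def nearest_centroid_accuracy(n_train, n_test=100):
    """Train on n_train examples per class, report test accuracy."""
    X_tr, y_tr = sample(n_train)
    X_te, y_te = sample(n_test)
    cents = np.vstack([X_tr[y_tr == k].mean(axis=0) for k in range(N_CLASSES)])
    pred = np.argmin(((X_te[:, None, :] - cents[None]) ** 2).sum(-1), axis=1)
    return (pred == y_te).mean()

# Same ladder of training-set sizes as the study: 5, 10, 20, 50, 100, 200
for n in (5, 10, 20, 50, 100, 200):
    print(n, round(nearest_centroid_accuracy(n), 3))
```

The point of the design, which the toy preserves, is that accuracy as a function of training-set size tends to saturate; the study asks where that saturation point lies for real CNNs on real CT data.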
Natural Language Processing
One particular aspect of machine learning that is being explored is natural language processing (NLP), where the computer is taught to analyze text and build a vocabulary database. In imaging applications, a system could combine vocabulary analysis with an image reference library to provide decision support in diagnosing a medical condition.
Hsu highlighted a pair of studies published last year that applied NLP algorithms to different medical imaging cases. The first study investigated how data extracted from a mammography report could be used for decision support. The research team built a decision support system (DSS) that picked out Breast Imaging-Reporting and Data System (BI-RADS) descriptors regarding lesion malignancy. The goal was for the DSS to accurately predict diagnosis of breast cancer from the radiology text reports based on the probability of malignancy and the final BI-RADS assessment category. Using a reference standard of 300 mammography reports, the DSS was able to achieve 97.58 percent accuracy in predicting the correct final BI-RADS category.7
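The study's DSS is not publicly described here, but the kind of extraction it performs can be sketched in a few lines: a regular expression pulls the final BI-RADS assessment category out of free-text report snippets. The reports and the pattern below are hypothetical, invented for illustration only.

```python
import re

# Hypothetical mammography report snippets; real BI-RADS reports vary widely.
reports = [
    "Findings: spiculated mass, upper outer quadrant. ASSESSMENT: BI-RADS 4 - suspicious.",
    "No mammographic evidence of malignancy. BI-RADS Category 1.",
    "Benign-appearing calcifications. BIRADS: 2.",
]

# A pattern tolerant of common spellings: "BI-RADS 4", "BI-RADS Category 1", "BIRADS: 2"
BIRADS_RE = re.compile(r"BI-?RADS(?:\s*Category)?\s*:?\s*([0-6])", re.IGNORECASE)

def extract_birads(text):
    """Return the first BI-RADS category found in the text, or None."""
    m = BIRADS_RE.search(text)
    return int(m.group(1)) if m else None

print([extract_birads(r) for r in reports])  # → [4, 1, 2]
```

A production NLP pipeline goes well beyond this (negation handling, descriptor extraction, probability modeling), but the sketch shows why non-standardized report language makes even the simplest extraction step fragile.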
Similar efforts were applied to echocardiography reports in a study from Northwestern University Feinberg School of Medicine. This analysis had a slightly broader scope, using a custom-built, echocardiography-specific NLP tool dubbed EchoInfer to extract multiple data elements related to cardiovascular structure and function. The system analyzed 15,116 reports from 1,684 patients, extracting 59 quantitative and 21 qualitative data elements per report. In a sample of 50 reports, the system achieved a precision of 94.06 percent and a recall of 92.21 percent across all 80 data elements; in an expanded sample of 400 reports, precision was above 97 percent and recall ranged from 92 to 99.9 percent. Errors were largely attributed to the non-standardized nature of the reports.8
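For readers unfamiliar with the two metrics quoted above: precision is the fraction of extracted elements that are correct, while recall is the fraction of true elements the system managed to extract. A quick illustration with made-up counts (not the study's data):

```python
# Precision and recall from raw counts. The counts below are invented
# for illustration; they are not EchoInfer's actual tallies.
def precision_recall(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)  # correct / all extracted
    recall = true_pos / (true_pos + false_neg)     # correct / all that exist
    return precision, recall

p, r = precision_recall(true_pos=950, false_pos=60, false_neg=80)
print(f"precision={p:.2%} recall={r:.2%}")  # → precision=94.06% recall=92.23%
```

The trade-off is typical of NLP extraction: a stricter pattern raises precision (fewer spurious matches) at the cost of recall (more missed elements), and vice versa.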
Radiomics and Radiogenomics
Teaching computer algorithms to assist with detection and diagnosis from medical images requires, in part, that the algorithm understand the phenotype, or physical appearance, of different types of images. One method for doing so is radiomics, in which data-characterization algorithms extract quantitative features from medical images.
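A minimal sketch of what "extracting quantitative features" can mean in practice, using a synthetic image and a handful of first-order features. Real pipelines (for example, the open-source pyradiomics package) compute hundreds of shape, histogram and texture features; everything below is illustrative.

```python
import numpy as np

# A toy "CT slice": a 64x64 array with a bright circular "lesion" on a
# dark background, plus noise. HU-like values are invented for illustration.
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2      # segmented lesion
image = np.where(mask, 80.0, -700.0)                   # lesion vs. lung-like background
rng = np.random.default_rng(1)
image = image + rng.normal(0, 5, image.shape)          # acquisition noise

# First-order radiomic features computed over the segmented region
roi = image[mask]
features = {
    "volume_px": int(mask.sum()),             # size (pixel count)
    "mean_hu": float(roi.mean()),             # mean intensity
    "std_hu": float(roi.std()),               # histogram spread
    "p90_hu": float(np.percentile(roi, 90)),  # histogram percentile
}
print(features)
```

Size and intensity features like these are exactly the kind the Columbia study found most reproducible; texture features, which depend on fine spatial relationships between neighboring voxels, are far more sensitive to acquisition and reconstruction settings.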
As with any other type of analysis, however, the results must be reproducible in order to be used confidently. In a study published last March, a team from Columbia University Medical Center explored how different CT imaging parameters and sequences impacted the reproducibility of radiomic data.
CT scans from lung cancer patients were used, and each scan was reconstructed using different parameters, including:
• Six different imaging settings;
• Varying slice thicknesses (1.25, 2.5 and 5 mm); and
• Both sharp and smooth reconstruction algorithms.
The reconstructed images were compared to same-day repeat scans that were reconstructed using the same six imaging settings. The data indicated that radiomic features were largely reproducible across various imaging settings; however, the researchers warned that smooth and sharp reconstruction algorithms should not be used interchangeably.9
The research team conducted a separate analysis to see how different imaging parameters revealed different radiomic features. To achieve this, they examined the repeat scans reconstructed at the same slice thickness but with different algorithms (for a total of three settings). A total of 89 quantitative features were grouped into 15 feature classes. Results showed that basic features such as tumor size, histogram-derived density and shape were reproducible across all settings, while texture features were more inconsistent.9
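One common way to quantify the test-retest reproducibility of a radiomic feature is the concordance correlation coefficient (CCC), which penalizes both scatter and systematic offset between repeat measurements. The study's exact metric may differ; the feature values below are simulated to show how a stable feature and an unstable one separate.

```python
import numpy as np

def ccc(x, y):
    """Concordance correlation coefficient between paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(2)
base = rng.normal(50, 10, 40)         # a feature measured on scan 1 (simulated)
stable = base + rng.normal(0, 1, 40)  # reproducible feature on the repeat scan
texture = base + rng.normal(0, 12, 40)  # unstable texture-like feature

print(round(ccc(base, stable), 3), round(ccc(base, texture), 3))
```

A CCC near 1 marks a feature as reproducible across repeat scans; features whose CCC drops when reconstruction settings change (as the study found for texture features) should not be pooled across those settings.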
References
1. McWilliams, J.P., et al. “Standardized Reporting in IR: A Prospective Multi-Institutional Pilot Study,” Journal of Vascular and Interventional Radiology, Dec. 2016.
2. Zafar, H.M., Bugos, E.K., Langlotz, C.P., Frasso, R. “Chasing a Ghost: Factors that Influence Primary Care Physicians to Follow Up on Incidental Imaging Findings,” Radiology, published online May 17, 2016.
3. Everson, J., Kocher, K.E., Adler-Milstein, J. “Health information exchange associated with improved emergency department care through faster accessing of patient information from outside organizations,” Journal of the American Medical Informatics Association, Aug. 12, 2016.
4. Miles, R.C., Hippe, D.S., Elmore, J.G., et al. “Patient Access to Online Radiology Reports: Frequency and Sociodemographic Characteristics Associated with Use,” Academic Radiology, published online June 2016.
5. Burns, J.E., Yao, J., Munoz, H., Summers, R.M. “Automated detection, localization and classification of traumatic vertebral body fractures in the thoracic and lumbar spine at CT,” Radiology, Jan. 2016.
6. Cho, J., Lee, K., Shin, E., et al. “How much data is needed to train a medical image deep learning system to achieve necessary high accuracy?,” arXiv:1511.06348v2, Jan. 7, 2016.
7. Bozkurt, S., Gimenez, F., Burnside, E.S., et al. “Using automatically extracted information from mammography reports for decision support,” Journal of Biomedical Informatics, Aug. 2016. E-published July 4, 2016.
8. Nath, C., Albaghdadi, M.S., Jonnalagadda, S.R. “A Natural Language Processing Tool for Large-Scale Data Extraction from Echocardiography Reports,” PLoS One, April 28, 2016.
9. Zhao, B., Tan, Y., Tsai, W.Y., et al. “Reproducibility of radiomics for deciphering tumor phenotype with imaging,” Scientific Reports, March 24, 2016.