News | Nuclear Imaging | March 21, 2019

Improving Molecular Imaging Using a Deep Learning Approach

New technique has the potential to improve the quality and speed of imaging

March 21, 2019 — Comprehensive molecular images of organs and tumors in living organisms can be generated at ultra-fast speed using a new deep learning approach to image reconstruction developed by researchers at Rensselaer Polytechnic Institute.

The research team’s new technique has the potential to vastly improve the quality and speed of imaging in live subjects, and was the focus of an article recently published in Light: Science & Applications, a Nature journal.1

Compressed sensing-based imaging is a signal processing technique that can be used to create images based on a limited set of point measurements. Recently, a Rensselaer research team proposed a novel instrumental approach to leverage this methodology to acquire comprehensive molecular data sets, as reported in Nature Photonics.2 While that approach produced more complete images, processing the data and forming an image could take hours.
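For readers unfamiliar with the underlying method, the sketch below illustrates the general idea of compressed sensing recovery: a sparse signal is reconstructed from far fewer random point measurements than unknowns using iterative soft-thresholding (ISTA). The signal size, measurement matrix and solver settings are illustrative placeholders, not the Rensselaer instrument's actual reconstruction pipeline.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding operator used in sparse (L1) recovery."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_reconstruct(A, y, lam=0.05, n_iter=200):
    """Recover a sparse signal x from a few measurements y = A @ x
    using the Iterative Shrinkage-Thresholding Algorithm (ISTA)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the least-squares term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy example: a sparse 256-sample signal observed through 64 random patterns.
rng = np.random.default_rng(0)
n, m, k = 256, 64, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)     # random measurement matrix
y = A @ x_true                               # 64 point measurements
x_hat = ista_reconstruct(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Iterative solvers of this kind are one reason the earlier instrumental approach took hours to form an image: the reconstruction must be run repeatedly over large data sets rather than computed in a single pass.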

This latest methodology developed at Rensselaer builds on the previous advancement and has the potential to produce real-time images, while also improving the quality and usefulness of the images produced. This could facilitate the development of personalized drugs, improve clinical diagnostics or identify tissue to be excised.

In addition to providing an overall snapshot of the subject being examined, including the organs or tumors that researchers have visually targeted with the help of fluorescence, this imaging process can reveal information about the successful intracellular delivery of drugs by measuring the decay rate of the fluorescence.
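The decay-rate measurement mentioned above is the basis of fluorescence lifetime imaging: at each pixel the fluorescence intensity falls off roughly as a single exponential, and the time constant of that decay (the lifetime) reflects the fluorophore's local environment. The snippet below is a minimal, generic illustration of estimating a lifetime by fitting a single-exponential decay to time-resolved counts; the gate times, counts and noise level are made up, and Net-FLICS itself learns lifetime maps directly rather than curve-fitting pixel by pixel.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau):
    """Single-exponential fluorescence decay model I(t) = A * exp(-t / tau)."""
    return amplitude * np.exp(-t / tau)

# Hypothetical time-resolved measurement: gate times in nanoseconds and photon counts.
t = np.linspace(0, 10, 50)                                   # ns
rng = np.random.default_rng(1)
counts = decay(t, amplitude=1000.0, tau=2.5) + rng.normal(0, 10, t.size)

params, _ = curve_fit(decay, t, counts, p0=(counts.max(), 1.0))
print(f"estimated lifetime: {params[1]:.2f} ns")              # decay rate = 1 / lifetime
```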

To enable almost real-time visualization of molecular events, the research team leveraged the latest developments in artificial intelligence. The vastly improved image reconstruction is accomplished using a deep learning approach. Deep learning is a complex set of algorithms designed to teach a computer to recognize and classify data. Specifically, the team developed a convolutional neural network architecture it calls Net-FLICS (fluorescence lifetime imaging with compressed sensing).
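The article does not detail the Net-FLICS layers, so the toy network below is only a rough illustration of the general shape of such a model: it takes a block of compressed, time-resolved measurements as input and produces both an intensity image and a lifetime image in a single forward pass. The layer sizes, pattern and gate counts are hypothetical and far smaller than a real reconstruction network (PyTorch is assumed).

```python
import torch
import torch.nn as nn

class TinyFLICSNet(nn.Module):
    """Hypothetical, much-simplified stand-in for a Net-FLICS-style network.
    It maps a stack of compressed time-resolved measurements to an intensity
    image and a fluorescence-lifetime image; the real architecture is in the paper."""

    def __init__(self, image_size=32):
        super().__init__()
        self.image_size = image_size
        # Treat the (patterns x time gates) measurement block as a 1-channel image.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        flat = 32 * 8 * 8
        # Two heads: one reconstructs intensity, the other the lifetime map.
        self.intensity_head = nn.Linear(flat, image_size * image_size)
        self.lifetime_head = nn.Linear(flat, image_size * image_size)

    def forward(self, measurements):
        h = self.features(measurements).flatten(start_dim=1)
        s = self.image_size
        return (self.intensity_head(h).view(-1, 1, s, s),
                self.lifetime_head(h).view(-1, 1, s, s))

# Dummy batch of two samples: 64 compressed patterns x 160 time gates each.
net = TinyFLICSNet()
intensity, lifetime = net(torch.randn(2, 1, 64, 160))
print(intensity.shape, lifetime.shape)   # torch.Size([2, 1, 32, 32]) twice
```

Once trained, a network like this replaces the iterative solver with a single forward pass, which is what makes near real-time reconstruction plausible.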

“This technique is very promising in getting a more accurate diagnosis and treatment,” said Pingkun Yan, co-director of the Biomedical Imaging Center at Rensselaer. “This technology can help a doctor better visualize where a tumor is and its exact size. They can then precisely cut off the tumor instead of cutting a larger part and spare the healthy, normal tissue.”

Yan developed this approach with corresponding author Xavier Intes, the other co-director of the Biomedical Imaging Center at Rensselaer, which is part of the Rensselaer Center for Biotechnology and Interdisciplinary Studies. Doctoral students Marien Ochoa and Ruoyang Yao supported the research.

“At the end, the goal is to translate these to a clinical setting. Usually when you have clinical systems you want to be as fast as possible,” said Ochoa, as she reflected on the speed with which this new technique allows researchers to capture these images.

Further development is required before this new technology can be used in a clinical setting. However, its progress has been accelerated by incorporating simulated data based on modeling, a particular specialty for Intes and his lab.

“For deep learning usually you need a very large amount of data for training, but for this system we don’t have that luxury yet because it’s a very new system,” said Yan.

He said the team’s research also shows that modeling can be used innovatively in imaging, with the approach extending accurately from simulated data to real experimental data.
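As a rough illustration of that idea, the sketch below generates one synthetic training pair the way a simulation-based pipeline might: a random scene of fluorescent blobs with assigned lifetimes is passed through a toy forward model (exponential decays projected onto random binary patterns, plus noise) to produce the compressed measurements a network would learn from. The scene statistics, pattern type and noise model are placeholders, not the Intes lab's actual simulation.

```python
import numpy as np

def simulate_training_pair(rng, image_size=32, n_patterns=64, n_gates=160, gate_ns=0.05):
    """Generate one synthetic training pair: compressed time-resolved
    measurements plus the ground-truth intensity and lifetime maps.
    A toy forward model standing in for a real instrument simulation."""
    # Random ground truth: a few bright blobs, each with its own lifetime.
    intensity = np.zeros((image_size, image_size))
    lifetime = np.zeros((image_size, image_size))
    for _ in range(rng.integers(1, 4)):
        r, c = rng.integers(4, image_size - 4, size=2)
        intensity[r - 3:r + 3, c - 3:c + 3] = rng.uniform(0.5, 1.0)
        lifetime[r - 3:r + 3, c - 3:c + 3] = rng.uniform(0.5, 3.0)    # ns

    # Time-resolved decay at every pixel: I(t) = intensity * exp(-t / lifetime).
    t = np.arange(n_gates) * gate_ns
    safe_tau = np.where(lifetime > 0, lifetime, 1.0)
    decays = intensity[..., None] * np.exp(-t / safe_tau[..., None])
    decays[lifetime == 0] = 0.0

    # Compressed sensing: project each time gate onto random binary patterns.
    patterns = rng.integers(0, 2, size=(n_patterns, image_size * image_size))
    measurements = patterns @ decays.reshape(-1, n_gates)            # (patterns, gates)
    measurements += rng.normal(0, 0.01, measurements.shape)          # detector noise
    return measurements, intensity, lifetime

rng = np.random.default_rng(42)
m, inten, tau = simulate_training_pair(rng)
print(m.shape, inten.shape, tau.shape)   # (64, 160) (32, 32) (32, 32)
```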

For more information: www.nature.com/lsa

 

References

1. Yao R., Ochoa M., Yan P., Intes X. Net-FLICS: fast quantitative wide-field fluorescence lifetime imaging with compressed sensing – a deep learning approach. Light: Science & Applications, March 6, 2019. https://doi.org/10.1038/s41377-019-0138-x

2. Pian Q., Yao R., Sinsuebphon N., Intes X. Compressive hyperspectral time-resolved wide-field fluorescence lifetime imaging. Nature Photonics, June 5, 2017.
