
Intel and Philips Demonstrate CPU Ability in Deep Learning Inference Test Cases

Tests demonstrate significant speed improvements in bone-age-prediction modeling and lung segmentation


August 15, 2018 — Intel and Philips recently tested two healthcare use cases for deep learning inference using Intel Xeon Scalable processors and the OpenVINO toolkit. One use case focused on X-rays of bones for bone-age-prediction modeling; the other on computed tomography (CT) scans of lungs for lung segmentation. In these tests, Intel and Philips achieved a 188-fold speed improvement for the bone-age-prediction model and a 38-fold speed improvement for the lung-segmentation model over baseline measurements.

Until recently, there was one prominent hardware solution for accelerating deep learning: graphics processing units (GPUs). By design, GPUs work well with images, but they also have inherent memory constraints that data scientists have had to work around when building some models.

Central processing units (CPUs) – in this case Intel Xeon Scalable processors – do not have those same memory constraints and can accelerate complex, hybrid workloads, including the larger, memory-intensive models typically found in medical imaging. For a large subset of artificial intelligence (AI) workloads, Intel Xeon Scalable processors can better meet data scientists’ needs than GPU-based systems, according to Intel. As Philips found in the two recent tests, running these workloads on CPUs enables the company to offer AI solutions to its customers at lower cost.

AI techniques such as object detection and segmentation can help radiologists identify issues faster and more accurately, which can translate to better prioritization of cases, better outcomes for more patients and reduced costs for hospitals.

Deep learning inference applications typically process workloads in small batches or in a streaming manner, rather than in the large batches used during training, and CPUs are well suited to these low-batch and streaming applications. Intel Xeon Scalable processors in particular offer an affordable, flexible platform for AI models, especially in conjunction with tools like the OpenVINO toolkit, which can help deploy pre-trained models efficiently without sacrificing accuracy.
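
The article does not detail the deployment code, and the OpenVINO Python API has changed across releases. The following is a minimal sketch, assuming a later openvino.runtime interface and a model already converted to OpenVINO IR format; the model path, input shape and preprocessing are hypothetical placeholders, not Philips' actual pipeline.

```python
# Minimal sketch: CPU inference with a pre-trained model converted to OpenVINO IR.
# Model path, input shape and preprocessing are hypothetical placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()

# Load a model previously converted to IR (.xml + .bin) with the Model Optimizer.
model = core.read_model("lung_segmentation.xml")

# Compile the model for CPU execution (Intel Xeon Scalable in this article's tests).
compiled_model = core.compile_model(model, device_name="CPU")

# Create a dummy input matching the model's expected shape (assumes a static shape).
input_layer = compiled_model.input(0)
image = np.random.rand(*input_layer.shape).astype(np.float32)

# Run streaming-style inference, one image (or small batch) at a time.
result = compiled_model([image])[compiled_model.output(0)]
print("Output shape:", result.shape)
```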

These tests show that healthcare organizations can implement AI workloads without expensive hardware investments.

The bone-age-prediction model went from an initial baseline test result of 1.42 images per second to a final tested rate of 267.1 images per second after optimizations – an increase of 188 times. The lung-segmentation model surpassed the target of 15 images per second by improving from a baseline of 1.9 images per second to 71.7 images per second after optimizations.
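
The quoted speedups are simply the ratio of optimized to baseline throughput, where throughput is images processed divided by elapsed time. A minimal sketch of how such a measurement and ratio might be computed, using the article's own figures (the run_inference function and workload are hypothetical):

```python
import time

def measure_throughput(run_inference, images):
    """Return images per second for a given inference function over a workload."""
    start = time.perf_counter()
    for image in images:
        run_inference(image)  # hypothetical single-image inference call
    elapsed = time.perf_counter() - start
    return len(images) / elapsed

# Speedup is optimized throughput divided by baseline throughput, e.g. the
# bone-age model: 267.1 / 1.42 ≈ 188x; lung segmentation: 71.7 / 1.9 ≈ 38x.
baseline_ips = 1.42
optimized_ips = 267.1
print(f"Speedup: {optimized_ips / baseline_ips:.0f}x")
```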

Running healthcare deep learning workloads on CPU-based devices offers direct benefits to companies like Philips, because it allows them to offer AI-based services that do not drive up costs for their end customers, according to Intel. As shown in this test, companies like Philips can offer AI algorithms for download through an online store as a way to increase revenue and differentiate themselves from growing competition.

Multiple trends are contributing to this shift:

  • As medical image resolution improves, medical image file sizes are growing – many images are 1GB or greater;
  • More healthcare organizations are using deep learning inference to more quickly and accurately review patient images; and
  • Organizations are looking for ways to do this without buying expensive new infrastructure.

For more information: www.intel.com, www.usa.philips.com/healthcare

 
