Feature | Artificial Intelligence | April 25, 2017

Low-Cost AI Could Screen for Cervical Cancer Better Than Humans

An artificial intelligence image detection method has the potential to outperform Pap and HPV tests in screening for cervical cancer


April 25, 2017 — Artificial intelligence (AI) is already exceeding human abilities in some domains. Self-driving cars use AI to perform certain driving tasks more safely than people. E-commerce companies use AI to tailor product ads to customers' tastes more quickly and precisely than any human marketing analyst.

And, soon, AI will be used to "read" biomedical images more accurately than medical personnel alone — providing better early cervical cancer detection at lower cost than current methods.

However, this does not necessarily mean radiologists will soon be out of business.

"Humans and computers are very complementary," said Sharon Xiaolei Huang, Ph.D., associate professor of computer science and engineering at Lehigh University in Bethlehem, Pa. "That's what AI is all about."

Huang directs the Image Data Emulation & Analysis Laboratory at Lehigh where she works on artificial intelligence related to vision and graphics, or, as she says: "creating techniques that enable computers to understand images the way humans do." Among Huang's primary interests is training computers to understand biomedical images.

Now, as a result of 10 years' work, Huang and her team have created a cervical cancer screening technique that, based on an analysis of a very large dataset, has the potential to perform as well as or better than human interpretation of traditional screening results, such as Pap tests and HPV tests — at a much lower cost. The technique could be used in less-developed countries, where 80 percent of deaths from cervical cancer occur.

The researchers are currently seeking funding for the next step in their project, which is to conduct clinical trials using this data-driven detection method.

Watch the video interview "Expanding Role for Artificial Intelligence in Medical Imaging" with Steve Holloway of healthcare market intelligence firm Signify Research at HIMSS 2017.

A more accurate screening tool, at lower cost

Huang's screening system is built on image-based classifiers (algorithms that assign images to categories) trained on a large number of Cervigram images. Cervigrams are photographs of the cervix taken by digital cervicography, a noninvasive visual examination method. Reading these images is intended to detect cervical intraepithelial neoplasia (CIN), the potentially precancerous change and abnormal growth of squamous cells on the surface of the cervix.

"Cervigrams have great potential as a screening tool in resource-poor regions where clinical tests such as Pap and HPV are too expensive to be made widely available," said Huang. "However, there is concern about Cervigrams' overall effectiveness due to reports of poor correlation between visual lesion recognition and high-grade disease, as well as disagreement among experts when grading visual findings."

Huang thought that computer algorithms could help improve accuracy in grading lesions using visual information — a suspicion that, so far, is proving correct.

Because Huang's technique has been shown, in an analysis of the very large dataset, to be both more sensitive (better able to detect abnormalities) and more specific (producing fewer false positives), it could also be used to improve cervical cancer screening in developed countries such as the United States.

"Our method would be an effective low-cost addition to a battery of tests helping to lower the false positive rate since it provides 10 percent better sensitivity and specificity than any other screening method, including Pap and HPV tests," said Huang.

Correlating visual features and patient data to cancer

To identify the characteristics that are most helpful in screening for cancer, the team created hand-crafted pyramid features (multi-scale image descriptors that serve as basic components of recognition systems) and also investigated the performance of a common deep learning framework, convolutional neural networks (CNN), for cervical disease classification.
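The article names pyramid features such as PHOG (pyramid histogram of oriented gradients) but does not reproduce the authors' code. As an illustration only, a minimal PHOG-style descriptor can be sketched in NumPy; the function name, pyramid levels and bin count below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def phog_features(img, levels=(1, 2, 4), bins=8):
    """Minimal PHOG-style descriptor (illustrative, not the paper's code).

    At each pyramid level the image is split into level x level cells, and a
    gradient-orientation histogram weighted by gradient magnitude is computed
    per cell. All cell histograms are concatenated into one feature vector.
    """
    gy, gx = np.gradient(img.astype(float))          # row and column gradients
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
    feats = []
    h, w = img.shape
    for level in levels:
        ys = np.linspace(0, h, level + 1, dtype=int)
        xs = np.linspace(0, w, level + 1, dtype=int)
        for i in range(level):
            for j in range(level):
                a = ang[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].ravel()
                m = mag[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].ravel()
                hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
                total = hist.sum()
                feats.append(hist / total if total > 0 else hist)
    return np.concatenate(feats)

# Feature length = bins * (1 + 4 + 16) = 8 * 21 = 168 for the levels above.
img = np.random.default_rng(0).random((64, 64))
vec = phog_features(img)
print(vec.shape)  # (168,)
```

A vector like this, possibly concatenated with color (PLAB) and texture (PLBP) pyramids as in the study, would then be fed to a standard classifier such as a random forest.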

They describe their results in an article in the March issue of Pattern Recognition titled "Multi-feature based benchmark for cervical dysplasia classification." The researchers have also released the multi-feature dataset along with extensive evaluations using seven classic classifiers.

To build the screening tool, Huang and her team used data from 1,112 patient visits, where 345 of the patients were found to have lesions that were positive for moderate or severe dysplasia (considered high-grade and likely to develop into cancer) and 767 had lesions that were negative (considered low-grade with mild dysplasia typically cleared by the immune system).

These data were selected from a large medical archive collected by the U.S. National Cancer Institute consisting of information from 10,000 anonymized women who were screened using multiple methods, including Cervigrams, over a number of visits. The data also contains the diagnosis and outcome for each patient.

"The program we've created automatically segments tissue regions seen in photos of the cervix, correlating visual features from the images to the development of precancerous lesions," said Huang. "In practice, this could mean that medical staff analyzing a new patient's Cervigram could retrieve data about similar cases — not only in terms of optics, but also pathology since the dataset contains information about the outcomes of women at various stages of pathology."

From the study: "...with respect to accuracy and sensitivity, our hand-crafted PLBP-PLAB-PHOG feature descriptor with random forest classifier (RF.PLBP-PLAB-PHOG) outperforms every single Pap test or HPV test, when achieving a specificity of 90 percent. When not constrained by the 90 percent specificity requirement, our image-based classifier can achieve even better overall accuracy. For example, our fine-tuned CNN features with Softmax classifier can achieve an accuracy of 78.41 percent with 80.87 percent sensitivity and 75.94 percent specificity at the default probability threshold 0.5. Consequently, on this dataset, our lower-cost image-based classifiers can perform comparably or better than human interpretation based on widely-used Pap and HPV tests..."
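The operating point quoted above — sensitivity measured after fixing specificity at 90 percent — can be illustrated with a short sketch. The function and the synthetic scores below are hypothetical examples, not the authors' data or code: the threshold is set at the 90th percentile of the negative-class scores so that roughly 90 percent of negatives are correctly rejected, and sensitivity is then read off at that threshold.

```python
import numpy as np

def sensitivity_at_specificity(scores, labels, target_spec=0.90):
    """Report sensitivity at the threshold giving ~target_spec specificity.

    scores: predicted probability of high-grade disease.
    labels: 1 = high-grade (positive), 0 = low-grade (negative).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    neg = scores[labels == 0]
    pos = scores[labels == 1]
    # Threshold at the target_spec quantile of negative scores: about
    # target_spec of the negatives fall at or below it (true negatives).
    thresh = np.quantile(neg, target_spec)
    sensitivity = float(np.mean(pos > thresh))
    specificity = float(np.mean(neg <= thresh))
    return thresh, sensitivity, specificity

# Synthetic scores mimicking the study's class sizes (767 negative, 345 positive).
rng = np.random.default_rng(1)
neg_scores = rng.normal(0.30, 0.10, 767)   # simulated low-grade cases
pos_scores = rng.normal(0.70, 0.10, 345)   # simulated high-grade cases
scores = np.concatenate([neg_scores, pos_scores])
labels = np.concatenate([np.zeros(767, int), np.ones(345, int)])
thresh, sens, spec = sensitivity_at_specificity(scores, labels)
```

Comparing classifiers at a fixed specificity, as the study does, avoids the trap of one method looking more sensitive simply because it flags more cases overall.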

According to the researchers, their classifiers achieve higher sensitivity in a particularly important area: detecting moderate and severe dysplasia — or cancer.

Read the article "How Artificial Intelligence Will Change Medical Imaging."

For more information: www.sciencedirect.com
