Feature | Digital Radiography (DR) | January 29, 2019 | By Jeff Zagoudis

Achieving Optimum Reject Rate in Digital Radiography

Improved analytics and technology advancements can help minimize the need for X-ray retakes

In today’s digital environment, a radiologist only sees images saved and shared to the PACS, so a firm understanding of X-ray reject rates is crucial for high image quality and good workflow.

X-rays were the first medical imaging technology, and they remain among the most commonly performed exams worldwide. The technology has evolved immensely since its discovery in 1895, progressing from the original screen-film systems to computed radiography (CR) to today’s digital radiography (DR) units. While the technology has changed, the need for high-quality images at the lowest possible dose to the patient has not; if anything, it has intensified as medicine shifts toward value-based care. For DR, this means optimizing protocols and techniques through a robust program of reject rate analysis.

 

Defining Rejects and Reject Analysis

Before the need to retake an X-ray can be determined, it must be decided what constitutes a “rejected” image. According to Ingrid Reiser, Ph.D., DABR, a clinical diagnostic physicist and associate professor of radiology at the University of Chicago, rejects are patient images that are discarded by the technologist without being presented to the radiologist. She shared the definition as part of a presentation on reject rate analysis at the 2018 Radiological Society of North America (RSNA) annual meeting.

In today’s digital environment, the second part of that definition — without being presented to the radiologist — is the key consideration. A radiologist will only see images that are saved and shared to the picture archiving and communication system (PACS), so they likely have no idea how many images are actually acquired during an exam. In this scenario, having a firm understanding of reject rates is crucial for maintaining high image quality and good workflow. Furthermore, Reiser noted, each image not sent to the radiologist’s workstation represents wasted radiation dose to the patient — a cardinal sin in radiology.

Reiser stressed, however, that the goal of reject rate analysis is not to drive the rate to zero. Poor-quality images are an unavoidable part of diagnostic imaging, and images that cannot support a diagnosis should rightly be removed from consideration.
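Under the definition above, a reject rate is simply the share of acquired images discarded before reaching PACS. The short sketch below illustrates that arithmetic; the function name and example counts are hypothetical, not drawn from any site's data.

```python
def reject_rate(images_acquired: int, images_sent_to_pacs: int) -> float:
    """Fraction of acquired images discarded by the technologist
    before being presented to the radiologist (i.e., never sent to PACS)."""
    if images_acquired <= 0:
        raise ValueError("images_acquired must be positive")
    rejected = images_acquired - images_sent_to_pacs
    return rejected / images_acquired

# Hypothetical month: 520 images acquired, 481 saved to PACS
rate = reject_rate(520, 481)
print(f"Reject rate: {rate:.1%}")  # Reject rate: 7.5%
```

Note that the numerator counts every discarded exposure, which is why each rejected image also represents radiation dose delivered to the patient without diagnostic benefit.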

 

Digital Challenges of Reject Analysis

Examining rejects and reject rates tells a story, according to Alisa Walz-Flannigan, Ph.D., diagnostic medical physicist at the Mayo Clinic in Rochester, Minn. Reviewing rejected images (and rejected image data) provides information about image quality standards, inherent imaging problems and modes of failure, as well as how technologists troubleshoot these types of issues.

While technological advances have eliminated many of the challenges of screen-film and computed radiography, digital X-rays come with their own set of difficulties:

  • Many hospitals and imaging centers use imaging systems from multiple vendors, and each vendor may collect different data elements using different methods;
  • As a result, information retrieval may be cumbersome, and some desired information may not be retrievable at all;
  • Reject analysis may require a software add-on at additional cost; or
  • The reject analysis software could interfere with clinical operation.

 

Quality Control for DR

While imaging standards and practices are unique to every radiology department, there are resources available for practitioners seeking guidance. The American Association of Physicists in Medicine (AAPM) Task Group 151 published a report in 2015 on quality control procedures in digital radiography, with the goal of recommending consistency tests for optimum image acquisition. The task group said the tests should be performed by a medical physicist, or by a radiologic technologist under the supervision of a medical physicist.

One of the main procedures recommended by the task group is defining a fault tree of actions that need to be taken when certain fault conditions are met. The diagram should begin with a particular problem, such as an artifact in a patient image, and run through possible corrective actions based on the source and severity of the artifact.

The AAPM report also reinforces the importance of performing reject rate analysis: The task group cites a prior study of 18 radiology departments in which 14 percent of patient exposure in projection radiography was attributed to repeated images.

Ultimately, the task group recommends a target reject rate of 8 percent for DR units. A literature review found reject rates for screen-film radiography hovered around 10 percent, with nearly half of all rejects due to exposure errors. The task group arrived at the 8 percent figure for digital on the assumption that exposure errors will be fewer with DR. Ten percent is the recommended threshold for further investigation and possible corrective action.
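The AAPM figures above amount to a simple monitoring rule: a measured rate at or below 8 percent meets the target, while anything above 10 percent warrants investigation. The sketch below is an illustrative encoding of that rule, not part of the task group report; the function and label names are assumptions for this example.

```python
TARGET_RATE = 0.08       # AAPM TG 151 suggested target reject rate for DR
ACTION_THRESHOLD = 0.10  # rate above which investigation is recommended

def assess_reject_rate(rate: float) -> str:
    """Classify a measured DR reject rate against the AAPM TG 151 figures."""
    if rate > ACTION_THRESHOLD:
        return "investigate"  # corrective action likely needed
    if rate > TARGET_RATE:
        return "monitor"      # above target but below the action threshold
    return "ok"               # within the recommended target

print(assess_reject_rate(0.075))  # ok
print(assess_reject_rate(0.11))   # investigate
```

In practice a department would apply such a rule per room, per exam type or per technologist, since a facility-wide average can mask localized problems.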

 

Starting a Reject Rate Analysis Project

While the principles of reject rate analysis may be straightforward, starting a reject rate analysis (RRA) project is a major undertaking. Reiser shared her experience leading an RRA project for digital X-ray at the University of Chicago beginning in 2014.

Reiser acknowledged that prior to 2014, all reject rates were self-reported by technologists. The reasons for this were twofold and largely practical: Reject analysis software was an optional add-on for many of the department’s DR units, and the feature was turned off on systems that did support it. This was done, Reiser said, to avoid storage problems on systems with limited hard drive space.

Quarterly reviews were conducted for each X-ray tech’s workload, and each tech was assigned a quality score for eight randomly chosen exams. (Approximately three exams were chosen per month to ensure a wide range of images were reviewed.) An in-person meeting was conducted with the technologist and the imaging technology coordinator or manager to discuss the findings. Those discussions included:

  • Review of the technologist’s reject rate by category (anatomy, clinical area, artifacts, motion, etc.);
  • Review of any quality improvement (QI) tickets issued and tracked through the PACS by radiologists. Tickets are assigned by improvement category, such as Joint Commission patient safety goals or image quality reasons. Reiser said the QI tickets were kept as an ongoing quality review measure;
  • Changes in reject rates; and
  • Ideas or feedback for future quality improvements.

 

Any corrective actions, if required, are also identified during the in-person meetings.
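The per-category review described above boils down to tallying rejects by their recorded reason and ranking the categories. The sketch below shows one minimal way to do this with Python's standard library; the log entries and category names are hypothetical, for illustration only.

```python
from collections import Counter

# Hypothetical reject log for one technologist over a quarter:
# each entry is the reason category recorded when the image was rejected.
reject_log = [
    "positioning", "motion", "positioning", "artifact",
    "exposure", "positioning", "motion",
]

by_category = Counter(reject_log)
total = sum(by_category.values())

# Rank categories by frequency to highlight the dominant failure mode
for category, count in by_category.most_common():
    print(f"{category:12s} {count:3d}  ({count / total:.0%})")
```

A breakdown like this makes the quarterly conversation concrete: if positioning dominates a technologist's rejects, the follow-up training can target that specific skill rather than image quality in general.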

 

Designing Interventions

Reiser concluded by offering advice for department heads looking to design and implement their own reject rate analysis interventions.

At the University of Chicago, the radiology department conducts a series of in-service meetings to teach the specific elements of reject analysis. The meetings cover classification of reject categories, a review of current reject rates and discussion of when to reject an image.

To focus on image review, separate in-services are conducted for technologists targeting specific exam types. Example focus areas include portable X-ray exams, wrist imaging, chest X-rays and lumbar spine exams. One rejected image is chosen randomly for each rejection category and compared with the “approved” diagnostic image.

The goal of these meetings, according to Reiser, is to develop image critique skills as a group and to encourage ownership of and accountability for the exams performed. For this reason, she recommended displaying the names of the technologists attached to each image set during discussions, rather than anonymizing them behind coded usernames.

Walz-Flannigan shared key questions in her RSNA presentation that should be considered during image review:

  • Is the accumulated data accurate?
  • Are standards being followed (with the system setup and technologist use)?
  • Do standards need to be adjusted?
  • Are good images being rejected?
  • What challenges are techs facing?

 

Reiser concluded that while interventions may not always lead to better reject rates, they may still improve image quality, making them a critical component of patient care. 
