Feature | April 03, 2009 | Kathleen Lang

Speech rec-driven reports support queries and clinical data mining for evidence-based radiology.

Speech recognition helps radiologists meet Medicare requirements by supporting standardization.


Speech recognition embedded in the radiology workstation has become a “must have” feature. According to industry experts, the most important recent advancement in the technology has been the ability to integrate speech recognition with other applications such as critical results, peer review, medical reference programs and data mining applications.
Lenox Hill Hospital in New York, NY, and Inland Imaging of Spokane, WA, are among the healthcare providers finding that capturing patient data throughout the radiology reporting process using a speech intelligence system enables evidence-based workflow. At both organizations, the initial impetus for installing speech recognition was improved report turnaround time (TAT), but subsequently, the focus has evolved to include data capture and more standardized reporting.
Standardized reporting, reimbursement
Speech recognition working within PACS/RIS/EHR systems eliminates the need to dictate information that can be “pre-fetched” and populated into the report. When a physician can dictate a report that has already been filled with demographic and clinical information from connected systems, the resulting improvements in accuracy and workflow can directly impact a hospital’s reimbursement.
At Lenox Hill Hospital, a 652-bed, acute care facility on Manhattan’s Upper East Side, the radiology department provides imaging for the ER, as well as for the hospital’s centers of excellence in internal medicine, cardiovascular disease, orthopedics, sports medicine, otolaryngology/head and neck surgery and maternal/pediatric health.
According to Gerard Durney, vice president for clinical services, Lenox Hill’s radiology department moved to voice recognition with PowerScribe in 2001, and in 2006, began another migration to RadWhere (now owned by Nuance Communications Inc.). The transition to voice recognition was a full metamorphosis from analog to digital.
“By changing our manual processes to go completely paperless, we wanted to create a streamlined, seamless system to capture and maintain data throughout the patient encounter. Speech recognition was a big part of that,” he said.
On the opposite coast, Inland Imaging provides imaging services to more than 20 hospitals and clinics across Washington and Arizona, producing approximately 800,000 radiology reports each year. This premier radiology group has been a development partner with MedQuist Inc. for SpeechQ for Radiology, an interactive front-end speech recognition solution, for several years.
According to Inland Imaging Business Associates CEO Jon Copeland, “With speech recognition, it is much easier to generate standard reports. We have highly sub-specialized physicians for each of our radiology divisions, and we have finally completed standard normal templates and inserts for all of them in the past year. We knew this would benefit both our referring physicians and their hospitals, in terms of reimbursement.”
“From an administrative perspective, we recognized that there is valuable data in the radiology report, particularly when it comes to getting reimbursed,” noted Durney. “The advent of MS-DRGs (Medicare-Severity Diagnosis Related Groups) has created a whole new set of challenges in this area, and speech rec-driven reports can help us meet those requirements by supporting standardization.”
According to Dana Ostrow, Lenox Hill director of customer service and ancillary clinical services, “The key to evidence-based radiology is providing consistency of reports; that is, what data/criteria are included each time. In RadWhere, custom templates pre-open for the radiologists to pull in specific data/measurements.”
At Inland Imaging, automatic data capture became part of a Six Sigma project to support evidence-based radiology. “We wrote a system called RadWorkFlow to insert data directly into the report, capturing information that was fragmented upstream from the radiologist. This included creating Smart Fields that automatically populate fields in the report with patient-specific and exam-related data,” explained Copeland. “Now, all the radiologist has to do is confirm/interpret, not dictate all of the ‘parroted’ information such as contrast and dose.” The program also brings in standard nomenclature — all of which leads to more accurate reporting and proper reimbursement for Inland Imaging and its hospital clients.
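As a rough illustration of the Smart Field idea (RadWorkFlow itself is proprietary, and the field names below are hypothetical), a report template can be pre-populated with exam metadata pulled from upstream systems so the radiologist dictates only the interpretation:

```python
# Minimal sketch of Smart Field-style auto-population (hypothetical field
# names; Inland Imaging's RadWorkFlow is not shown here).
from string import Template

# Exam metadata that would normally arrive from the RIS/modality upstream
# of the radiologist, rather than being dictated ("parroted") by hand.
exam_data = {
    "patient_name": "DOE, JANE",
    "exam": "CT CHEST WITH CONTRAST",
    "contrast_agent": "Omnipaque 350",
    "contrast_volume_ml": 100,
    "dose_ctdi_mgy": 12.4,
}

report_template = Template(
    "EXAM: $exam\n"
    "PATIENT: $patient_name\n"
    "TECHNIQUE: Helical CT of the chest after administration of "
    "$contrast_volume_ml mL $contrast_agent. CTDIvol $dose_ctdi_mgy mGy.\n"
    "FINDINGS: [radiologist dictates here]\n"
)

print(report_template.substitute(exam_data))
```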
Data mining, peer review and continuous improvement
Other applications that Inland Imaging is working to integrate with speech recognition include data mining and peer review (both prospective and retrospective). According to Copeland, “We can insert a pop-up window for a radiologist to read a prior interpretation and agree or disagree. The American College of Radiology stresses prospective review, so we have a randomization algorithm built in.”
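A prospective-review selector of this kind can be as simple as random sampling at the moment a study is opened for dictation. The sketch below is an assumption about the general approach, not Inland Imaging’s actual algorithm, and uses an arbitrary 5 percent sampling rate:

```python
# Rough sketch of a prospective peer-review selector (assumed 5% sampling
# rate; the actual randomization algorithm is not published here).
import random

PEER_REVIEW_RATE = 0.05  # fraction of cases flagged for a second opinion

def needs_peer_review(accession_number: str, rate: float = PEER_REVIEW_RATE) -> bool:
    """Randomly flag a case so the reading radiologist is shown the prior
    interpretation and asked to agree or disagree."""
    return random.random() < rate

# Example: decide at the moment a study is opened for dictation.
if needs_peer_review("ACC-2009-000123"):
    print("Show peer-review pop-up with prior interpretation.")
```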
In Lenox Hill’s system, a built-in data mining application, Nuance’s RadCube, provides analysis of the structured data gleaned from the radiology reports. “We can now quickly design and conduct a research query to determine pathology/disease states and clinical outcomes based on positive findings and recommendations, something that would have previously taken our FTEs weeks to do,” explained Ostrow.
“For example, we can search for any mention of pulmonary embolisms, and query which test was ordered: a CT (computed tomography) angiogram of the chest, and/or a lung V/Q (ventilation/perfusion) scan. We can then narrow the query to those patients with positive findings who had both exams, pulled from thousands of patients,” she said.
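To illustrate the shape of such a query (RadCube’s actual interface is not shown here, and the record fields are hypothetical), the same logic can be expressed as a simple filter over structured report records:

```python
# Illustrative version of the pulmonary-embolism query described above,
# run against structured report records.
reports = [
    {"mrn": "001", "exam": "CTA CHEST", "text": "Positive for pulmonary embolism ..."},
    {"mrn": "001", "exam": "VQ SCAN",  "text": "High probability for pulmonary embolism ..."},
    {"mrn": "002", "exam": "CTA CHEST", "text": "No evidence of pulmonary embolism ..."},
]

def mentions_pe(report):
    return "pulmonary embolism" in report["text"].lower()

def is_positive(report):
    return not report["text"].lower().startswith("no evidence")

# Patients with a positive PE finding who had both a CT angiogram and a V/Q scan.
by_patient = {}
for r in reports:
    if mentions_pe(r) and is_positive(r):
        by_patient.setdefault(r["mrn"], set()).add(r["exam"])

both_exams = [mrn for mrn, exams in by_patient.items()
              if {"CTA CHEST", "VQ SCAN"} <= exams]
print(both_exams)  # ['001']
```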
Lenox Hill is also able to conduct population analytics and clinical data mining of evidence-based radiology for continuous improvement. “We can analyze radiologist recommendation patterns based on type of examination or ordering physician. This allows us to share feedback with the clinicians, as well as the radiologists, as to how the resources are being used. We are also reviewing turnaround time demands from other departments and identifying outliers, looking for trends to improve overall performance,” said Ostrow.
Added Copeland, “For critical and urgent findings, turnaround time is now so quick that the final report can be ready almost instantly, but it still needs to get to the ordering physician so he or she is notified right away. We added modules to our system to close the loop and confirm the physician got that result. Hospitals are under pressure from the Joint Commission for managing critical results; they love that value-added service.”
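A minimal sketch of that closed-loop confirmation, assuming a hypothetical data model, is simply to record both the notification and the physician’s acknowledgment and treat the case as open until the acknowledgment exists:

```python
# Hedged sketch of "closing the loop" on a critical result: log the
# notification and require an acknowledgment before the case is closed.
from datetime import datetime

class CriticalResult:
    def __init__(self, accession, ordering_physician):
        self.accession = accession
        self.ordering_physician = ordering_physician
        self.notified_at = None
        self.acknowledged_at = None

    def notify(self):
        self.notified_at = datetime.now()

    def acknowledge(self):
        self.acknowledged_at = datetime.now()

    @property
    def loop_closed(self):
        return self.acknowledged_at is not None

result = CriticalResult("ACC-2009-000456", "Dr. Smith")
result.notify()
result.acknowledge()       # recorded when the physician confirms receipt
print(result.loop_closed)  # True
```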
NLP and interactive reports
Looking ahead, Lenox Hill and Inland Imaging both see the need for radiology to move from free-form text to Natural Language Processing (NLP) for more structured data analysis.
In the RadWhere application, Lexicon Mediated Entropy Reduction (LEXIMER) is an NLP engine for imaging, designed to diagram medical terms and evaluate the percentage of clinically important findings and rates of recommendation for subsequent action. With “leximerized” structured reports, Ostrow has written searches in the RadWhere databases for keywords such as pneumonia, tracheotomy, central line and pancreatitis, and flagged them for the coders, so that they are noted and dealt with at the point of admission.
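As a much-simplified stand-in for those searches (LEXIMER is a full NLP engine; this shows only the flag-for-coders step as a plain keyword match), the flagging logic might look like:

```python
# Simplified keyword flagging so coders see actionable terms at admission.
CODER_KEYWORDS = {"pneumonia", "tracheotomy", "central line", "pancreatitis"}

def flag_for_coders(report_text: str) -> set:
    """Return the watched keywords found in a report."""
    text = report_text.lower()
    return {kw for kw in CODER_KEYWORDS if kw in text}

print(flag_for_coders("Right lower lobe pneumonia. Central line in good position."))
# -> {'pneumonia', 'central line'}
```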
Copeland observed, “In the past, the radiology report was a long, static document that took two to three days to process, and then got filed away. Today, it is still too static as mere text. In the next generation, NLP and speech recognition should lead to a report that incorporates multimedia, and a live, interactive database including links, outcomes, medical terminology, chat sessions, and so on, for a whole new level of value and more meaningful service from radiology.
“The interactive report can be broken down into codes to generate claims, for more productivity. It can be matched to ordering protocol, and tested against community outcomes to provide more predictive medicine; in other words, guidelines on which exams to order. It’s all about getting information to the right people faster: the right study for the right patient, based on their signs and symptoms.”
For her “wish list,” Ostrow noted, “We are looking forward to more enhancements in resident workflow, integration with other systems, the ability to capture ‘reason for exam’ as structured content that can be data mined, and more real-time applications such as flagging critical information. We already have templates for specific imaging techniques; in the future, we expect to have more templates specific to who is ordering the study or the reason for the exam.”
According to Copeland, “Radiologists are expert consultants who communicate findings. With support from speech recognition, reports can enable this communication, creating closer links to referring physicians, who are then able to make diagnoses with much more information available to them.”

