Feature | September 09, 2011 | Don Fallati

Speech Recognition Trends for the Future

Speech recognition already provides many benefits to imaging departments, with even more possibilities in the future

Speech recognition helps radiologists reduce report turnaround time and manage increasing workloads.

Speech recognition has achieved strong adoption in radiology over the past several years, as many hospitals and groups have sought to preserve the convenience and high value of narrative dictation while streamlining their production process. The benefits have been clear and centered on improvements in reporting efficiency: substantial reductions in report turnaround time, cost savings and integration with picture archiving and communication system (PACS) workflow.

For example, when deploying the technology in the self-completion mode (systems also offer the option to delegate the task to a medical editor), radiologists have seen dramatic reductions in report turnaround time from hours to minutes, enabling them to manage increasing workloads.

On the workflow side, speech recognition today can provide a seamless documentation process from directly within the PACS, driven by that system’s worklist and offering concurrent access to images, patient data and context, and the speech recognition system. These benefits and strong ROI results continue to drive demand for the technology among hospitals, imaging centers and radiology groups.

While speech recognition catalyzed a fundamental change in radiology documentation and a concomitant boost in efficiency, the time is at hand for a technology breakthrough that will steepen the benefit curve once again. Radiology and the entire healthcare industry are gearing up to tackle an even larger set of objectives. Integrating departmental systems into a comprehensive electronic health record (EHR) strategy, helping achieve meaningful use and promoting significant patient safety and quality initiatives — all of these goals and more share attention with the ongoing demand for productivity and efficiency improvements.

A key element of the solution to these broader problems starts with the realization that much key medical intelligence is captured in narrative documentation, especially in radiology. In fact, that intelligence is best captured in narrative. The kind of reasoning and rationale information prized by physicians and critical to interpretation and subsequent use of data is variable, rather than highly structured and consistent. If radiology reporting is to contribute meaningful clinical information, the valuable narrative must be transformed into structured and coded data, not simply viewable text.

The Next Step
While some organizations have tried to produce such data by imposing greater degrees of structured reporting, results have been limited. Templates and macros play an important role in radiology, but physician resistance to a heavily structured mode of data entry remains high, due both to the time-consuming nature of this method compared with dictation and to the constraints it places on capturing the critical contextual information that reflects the radiologist’s thought process.

By itself, traditional speech recognition will not generate either the intelligence required or the order-of-magnitude process improvements that will be demanded for documentation in the EHR environment. The solution involves combining speech recognition tightly with another technology, natural language understanding (NLU), which applies sophisticated linguistic and machine-learning techniques to electronic text to comprehend the meaning of the document, not just the words.

Because meaning is understood, a dynamic is created that allows speech recognition to achieve optimal accuracy, no matter the radiologist’s dictation style. Dictation can be conversational and natural, eliminating the verbal cues, format rules and triggers required by conventional speech recognition.

With speech recognition and NLU technologies natively working together, enhanced accuracy is just one of the benefits. Clinical facts and concepts can also be identified from the dictation, structured to standard medical classifications and important standards, such as the HL7 Clinical Document Architecture (CDA), and shared. It is important to note that this approach is not two separate technology processes designed simply to tag data elements, which would have limited utility, but a real-time ability to create meaningful information from narrative dictation.
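To make the idea concrete, here is a minimal, hypothetical sketch of how an NLU stage might turn a narrative sentence into coded, CDA-style entries. The lexicon, code values and negation heuristic are invented for illustration and bear no relation to any vendor’s actual implementation; real systems perform far deeper linguistic analysis and map to full classifications such as SNOMED CT or RadLex.

```python
import re

# Illustrative lexicon only -- the phrases and code values below are
# made up for this sketch, not real classification entries.
LEXICON = {
    "pulmonary nodule": ("SNOMED-CT", "427359005"),
    "pleural effusion": ("SNOMED-CT", "60046008"),
}

NEGATION_CUES = ("no ", "without ", "negative for ")

def extract_facts(narrative):
    """Return coded entries, shaped loosely like CDA observations,
    for each known finding phrase, with crude negation detection."""
    facts = []
    text = narrative.lower()
    for phrase, (system, code) in LEXICON.items():
        match = re.search(r"\b" + re.escape(phrase) + r"\b", text)
        if not match:
            continue
        # Look a short distance back for a negating cue -- a toy
        # stand-in for understanding meaning rather than just words.
        window = text[max(0, match.start() - 20):match.start()]
        negated = any(cue in window for cue in NEGATION_CUES)
        facts.append({"observation": phrase, "codeSystem": system,
                      "code": code, "negated": negated})
    return facts

report = ("There is a small pulmonary nodule in the right upper lobe. "
          "No pleural effusion is seen.")
for fact in extract_facts(report):
    print(fact)
```

The key point the sketch tries to capture is that the output is structured, coded and negation-aware, so downstream systems can reason over it rather than merely display it.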

When fed to other applications or accessed via a robust query tool that permits reasoning across the data, this information can be analyzed, matched with other relevant data and used to drive actions. What is being described here is far more than a medical search engine, and the union of speech recognition and NLU offers radiology departments high-value benefits that contribute to their diverse strategic objectives.

A few examples include:
• Providing quality improvements in clinical documentation through interactive pre-signature alerts and notifications. For instance, the radiologist can be given an alert if information required for compliance with the Physician Quality Reporting System (formerly PQRI) is not recognized in the dictation, so that immediate correction can be made.
• Delivering to radiologists a summary and timeline of patient findings and changes in critical measurements (e.g., tracking and graphing tumor size over time), derived from information in prior reports.
• Triggering critical results alerts based on the documentation.
• Identifying information on a single patient across multiple reports and on specific disease patient populations, findings and other information of interest to researchers.
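The pre-signature alert and critical-results ideas above can be sketched as simple rules evaluated over already-structured facts. Everything here is hypothetical — the field names, the sample facts and the rules themselves are invented for the example, and a production system would draw its rules from compliance programs and critical-results policies.

```python
# Hypothetical structured facts, as an NLU stage might emit them;
# field names and values are invented for this sketch.
facts = [
    {"observation": "pneumothorax", "negated": False, "acuity": "critical"},
    {"observation": "tumor diameter", "value_cm": 2.4},
]

def pre_signature_alerts(facts):
    """Return alert messages to surface before the radiologist signs."""
    alerts = []
    for f in facts:
        # Critical-results rule: any non-negated critical finding
        # triggers an immediate notification.
        if f.get("acuity") == "critical" and not f.get("negated", False):
            alerts.append("CRITICAL RESULT: %s - notify referring physician"
                          % f["observation"])
    # Illustrative quality/compliance rule (in the spirit of a PQRS-style
    # check): flag a report that documents a measurement but records no
    # comparison to prior studies.
    has_measurement = any("value_cm" in f for f in facts)
    has_comparison = any(f.get("observation") == "comparison to prior"
                         for f in facts)
    if has_measurement and not has_comparison:
        alerts.append("QUALITY: measurement documented without "
                      "comparison to prior study")
    return alerts

for alert in pre_signature_alerts(facts):
    print(alert)
```

Because the checks run against structured facts rather than raw text, they can fire interactively before signature, when immediate correction is still cheap.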

Radiology departments in particular are under growing pressure to participate in a range of enterprise data gathering and reporting initiatives to support meaningful use, health information exchange (HIE), accountable care organizations and other strategies. The ability of NLU and speech recognition to convert dictation into context-aware structured content, fully indexed for HL7 CDA, provides an unmatched foundation to contribute to these goals and to ready radiology for myriad future demands. With these tools in hand, radiology is well armed to satisfy the substantial, varied and growing demands in today’s rapidly changing healthcare environment. 

Don Fallati is senior vice president of marketing for M*Modal.
