News | Dictation Systems | March 07, 2016

M*Modal Highlights Computer-Assisted Physician Documentation at HIMSS 2016

New capability of Fluency Direct platform delivers automated, real-time, interactive insights to clinicians as they document in the EHR

March 7, 2016 — M*Modal announced that its clinical intelligence platform has been broadly embraced by physicians at more than 150 sites since the application's launch in 2015. The solution's Computer-Assisted Physician Documentation (CAPD) capability augments M*Modal Fluency Direct to deliver automated, real-time and interactive insights to clinicians as they document in the electronic health record (EHR), supporting continuous improvement in how doctors care for their patients.

More than 200,000 physicians rely on M*Modal’s cloud-based Speech Understanding technology to accurately tell their patients’ stories. This interactive clinical intelligence system automatically brings the right information to the physician at the right point in time within the clinical workflow to deliver smarter care.

By utilizing the CAPD functionality of M*Modal’s speech recognition solution, Fluency Direct, to support clinical documentation improvement (CDI), healthcare organizations have reported a 30 percent reduction in retrospective queries and amendments to the documentation. This significantly increases efficiency and documentation accuracy for physicians. Moreover, 70 percent of physicians utilizing this system are interacting with the clinical insights, increasing physician technology adoption and engagement.

The three keys to such rapid and successful adoption are the accuracy of the insights derived using Natural Language Understanding of both structured and unstructured patient information, the unique ambient user experience, and the ease of technology integration.

“We have successfully and seamlessly integrated M*Modal’s CAPD into our physician workflows by incorporating it into our EHR. This CAPD provides information embedded in current workflows so the information sharing is non-disruptive, but still results in physician behavior change,” said John Showalter, M.D., chief health information officer at the University of Mississippi Medical Center. “Importantly, the real-time information shared can be personalized to maximize efficiency while minimizing workflow disruptions. Advanced analytics about the physician’s documentation can be done retrospectively using the application’s ‘silent-mode’ with no workflow disruption. The combination of real-time support and advanced analytics gives us the knowledge we need to improve our physicians’ documentation practices.”

M*Modal’s interactive documentation system is compatible with all leading EHRs, requiring no deep systems integration for ease and speed of deployment.

M*Modal displayed Fluency Direct with Computer-Assisted Physician Documentation at the 2016 Healthcare Information and Management Systems Society (HIMSS) conference, Feb. 29-March 4 in Las Vegas.

For more information: www.mmodal.com
