
Multimodal Machine Learning for Ejection Fraction Diagnosis from Electrocardiograms

A new multimodal ML framework combines ECG and EHR features to classify LVEF, outperforming baselines and maintaining performance under temporal validation.

Tags: multimodal-machine-learning, electrocardiogram, ejection-fraction, explainable-ai, frontier, automated

The authors present a multimodal machine learning approach to diagnose left ventricular ejection fraction (LVEF) from electrocardiograms (ECGs). The framework combines engineered 12-lead ECG timeseries features with structured electronic health record (EHR) variables to classify LVEF into four clinically used strata. To support model explainability, the authors identified the most influential ECG and EHR features via SHAP attributions. Using retrospective data from Hartford HealthCare, the authors trained XGBoost models on 36,784 ECG-echocardiogram pairs from 30,952 outpatients and evaluated temporal generalizability on 19,966 ECGs from a subsequent period. The multimodal model achieved one-vs-rest AUROCs of 0.95 (severe), 0.92 (moderate), 0.82 (mild), and 0.91 (normal), outperforming ECG-only and EHR-only baselines, and maintained performance under temporal validation. This work supports ECG-based, multimodal LVEF stratification as a practical screening and triage aid to prioritize confirmatory imaging where resources are limited.
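The headline results above are one-vs-rest AUROCs, one per LVEF stratum. As a minimal sketch of that evaluation (not the authors' code; the class names mirror the paper's four strata, but the scoring function and all numbers are illustrative), the metric can be computed directly from per-class scores via the Mann-Whitney formulation:

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive example outranks a randomly chosen
    negative one (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative examples")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def one_vs_rest_auroc(class_scores, true_classes, classes):
    """Per-stratum AUROC: each class versus the rest, as in the
    paper's four-way LVEF evaluation."""
    return {
        c: auroc([s[c] for s in class_scores],
                 [1 if t == c else 0 for t in true_classes])
        for c in classes
    }

# Illustrative scores for four ECGs over the paper's four strata.
classes = ["severe", "moderate", "mild", "normal"]
scores = [
    {"severe": 0.7, "moderate": 0.2, "mild": 0.05, "normal": 0.05},
    {"severe": 0.1, "moderate": 0.6, "mild": 0.2,  "normal": 0.1},
    {"severe": 0.05, "moderate": 0.15, "mild": 0.5, "normal": 0.3},
    {"severe": 0.02, "moderate": 0.08, "mild": 0.2, "normal": 0.7},
]
truth = ["severe", "moderate", "mild", "normal"]
print(one_vs_rest_auroc(scores, truth, classes))
```

In the paper itself, the scores come from XGBoost models over the combined ECG and EHR feature matrix; the one-vs-rest framing is what lets a four-class model be summarized by four separate AUROC values.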


Source: A Multimodal and Explainable Machine Learning Approach to Diagnosing Multi-Class Ejection Fraction from Electrocardiograms
