Multimodal Machine Learning for Ejection Fraction Diagnosis from Electrocardiograms
A new multimodal ML framework combines ECG and EHR features to classify LVEF, outperforming baselines and maintaining performance under temporal validation.
The authors present a multimodal machine learning approach to estimate left ventricular ejection fraction (LVEF) from electrocardiograms (ECGs). The framework combines engineered 12-lead ECG time-series features with structured electronic health record (EHR) variables to classify LVEF into four clinically used strata. To support model explainability, the authors identified the most influential ECG and EHR features via SHAP attributions. Using retrospective data from Hartford HealthCare, the authors trained XGBoost models on 36,784 ECG-echocardiogram pairs from 30,952 outpatients and evaluated temporal generalizability on 19,966 ECGs from a subsequent period. The multimodal model achieved one-vs-rest AUROCs of 0.95 (severe), 0.92 (moderate), 0.82 (mild), and 0.91 (normal), outperforming ECG-only and EHR-only baselines, and maintained performance under temporal validation. This work supports ECG-based, multimodal LVEF stratification as a practical screening and triage aid to prioritize confirmatory imaging where resources are limited.