Multimodal Learning (Relevance: 9/10)

CG-DMER: Hybrid Contrastive-Generative Framework for Disentangled Multimodal ECG Representation Learning

Ziwei Niu, Hao Sun, Shujun Bian, Xihong Yang, Lanfen Lin, Yuxin Liu, Yueming Jin
arXiv: 2602.21154v1 Published: 2026-02-24 Updated: 2026-02-24

AI Summary

Proposes CG-DMER, a framework that learns disentangled multimodal ECG representations via contrastive-generative learning, improving performance on ECG analysis tasks.

Main Contributions

  • Proposes spatial-temporal masked modeling to capture fine-grained spatial-temporal dependencies in ECG signals
  • Designs a representation disentanglement and alignment strategy that mitigates modality-specific biases
  • CG-DMER achieves state-of-the-art results on multiple public datasets
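The spatial-temporal masking idea from the first contribution can be sketched as below: blocks are hidden jointly across leads (spatial) and time patches (temporal), and a decoder would be trained to reconstruct them. The patch length, mask ratio, and zero-filling are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def spatiotemporal_mask(ecg, patch_len=50, mask_ratio=0.4, rng=None):
    """Mask random (lead, time-patch) blocks of a multi-lead ECG.

    ecg: array of shape (num_leads, num_samples), e.g. (12, 5000).
    Returns the masked signal and a boolean mask (True = masked),
    so a reconstruction head can be supervised on the hidden blocks.
    """
    rng = rng or np.random.default_rng(0)
    leads, samples = ecg.shape
    n_patches = samples // patch_len
    # One decision per (lead, patch) cell: masking spans both the
    # spatial (lead) and temporal (patch) dimensions at once.
    mask_cells = rng.random((leads, n_patches)) < mask_ratio
    mask = np.repeat(mask_cells, patch_len, axis=1)  # expand to sample level
    masked = ecg.copy()
    masked[:, :mask.shape[1]][mask] = 0.0  # zero out the masked regions
    return masked, mask

# Toy 12-lead ECG with 1000 samples per lead.
ecg = np.random.default_rng(1).standard_normal((12, 1000))
masked, mask = spatiotemporal_mask(ecg, patch_len=50, mask_ratio=0.4)
```

A real model would replace the zeroed regions with learnable mask tokens and minimize a reconstruction loss over the masked cells only.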

Methodology

Adopts a contrastive-generative framework that learns disentangled ECG representations through spatial-temporal masked modeling together with representation disentanglement and alignment.
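The disentanglement side can be illustrated with a minimal sketch: separate modality-shared and modality-specific branches, plus a penalty that discourages overlap between the two. The linear "encoders" and the squared-cosine penalty here are assumptions for illustration; the paper's actual encoders and loss terms may differ.

```python
import numpy as np

def cosine_orthogonality_loss(shared, specific, eps=1e-8):
    """Penalize overlap between modality-shared and modality-specific
    features via the squared cosine similarity of each paired row."""
    s = shared / (np.linalg.norm(shared, axis=1, keepdims=True) + eps)
    p = specific / (np.linalg.norm(specific, axis=1, keepdims=True) + eps)
    return float(np.mean(np.sum(s * p, axis=1) ** 2))

# Toy linear projections standing in for the two encoder branches.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))            # a batch of 4 ECG embeddings
W_shared = rng.standard_normal((16, 8))
W_specific = rng.standard_normal((16, 8))
shared = x @ W_shared                       # modality-invariant branch
specific = x @ W_specific                   # modality-specific branch
loss = cosine_orthogonality_loss(shared, specific)
```

Cross-modal contrastive alignment would then be applied only to the shared branch, so free-text report noise stays in the specific branch.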

Original Abstract

Accurate interpretation of electrocardiogram (ECG) signals is crucial for diagnosing cardiovascular diseases. Recent multimodal approaches that integrate ECGs with accompanying clinical reports show strong potential, but they still face two main concerns from a modality perspective: (1) intra-modality: existing models process ECGs in a lead-agnostic manner, overlooking spatial-temporal dependencies across leads, which restricts their effectiveness in modeling fine-grained diagnostic patterns; (2) inter-modality: existing methods directly align ECG signals with clinical reports, introducing modality-specific biases due to the free-text nature of the reports. In light of these two issues, we propose CG-DMER, a contrastive-generative framework for disentangled multimodal ECG representation learning, powered by two key designs: (1) Spatial-temporal masked modeling is designed to better capture fine-grained temporal dynamics and inter-lead spatial dependencies by applying masking across both spatial and temporal dimensions and reconstructing the missing information. (2) A representation disentanglement and alignment strategy is designed to mitigate unnecessary noise and modality-specific biases by introducing modality-specific and modality-shared encoders, ensuring a clearer separation between modality-invariant and modality-specific representations. Experiments on three public datasets demonstrate that CG-DMER achieves state-of-the-art performance across diverse downstream tasks.

Tags

ECG · Multimodal Learning · Representation Learning · Contrastive Learning · Generative Models

arXiv Categories

cs.AI