Multimodal Learning — Relevance: 10/10

MEDSYN: Benchmarking Multi-EviDence SYNthesis in Complex Clinical Cases for Multimodal Large Language Models

Boqi Chen, Xudong Liu, Jiachuan Peng, Marianne Frey-Marti, Bang Zheng, Kyle Lam, Lin Li, Jianing Qiu
arXiv: 2602.21950v1 | Published: 2026-02-25 | Updated: 2026-02-25

AI Summary

Introduces the MEDSYN benchmark to evaluate MLLMs' ability to synthesize multiple types of evidence when diagnosing complex clinical cases, revealing shortcomings in how models utilize evidence across modalities.

Main Contributions

  • Proposed MEDSYN, a multimodal clinical benchmark
  • Revealed a cross-modal evidence-utilization gap in MLLM-based diagnosis
  • Introduced an Evidence Sensitivity metric and used it to improve model performance

Methodology

Constructs a dataset of complex clinical cases containing multiple types of clinical evidence, evaluates 18 MLLMs on differential-diagnosis generation and final-diagnosis selection, and analyzes model weaknesses through ablation experiments.

Original Abstract

Multimodal large language models (MLLMs) have shown great potential in medical applications, yet existing benchmarks inadequately capture real-world clinical complexity. We introduce MEDSYN, a multilingual, multimodal benchmark of highly complex clinical cases with up to 7 distinct visual clinical evidence (CE) types per case. Mirroring clinical workflow, we evaluate 18 MLLMs on differential diagnosis (DDx) generation and final diagnosis (FDx) selection. While top models often match or even outperform human experts on DDx generation, all MLLMs exhibit a much larger DDx–FDx performance gap compared to expert clinicians, indicating a failure mode in synthesis of heterogeneous CE types. Ablations attribute this failure to (i) overreliance on less discriminative textual CE (*e.g.*, medical history) and (ii) a cross-modal CE utilization gap. We introduce Evidence Sensitivity to quantify the latter and show that a smaller gap correlates with higher diagnostic accuracy. Finally, we demonstrate how it can be used to guide interventions to improve model performance. We will open-source our benchmark and code.
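The paper does not publish the exact formula for Evidence Sensitivity; the sketch below is a purely hypothetical, ablation-style reading of the idea (the function names, toy accuracy numbers, and the relative-drop definition are all assumptions, not the authors' method): per-evidence sensitivity as the relative accuracy drop when one clinical-evidence (CE) type is withheld, and the cross-modal utilization gap as the spread between the most- and least-relied-upon CE types.

```python
# Hypothetical sketch of an ablation-based "evidence sensitivity" metric.
# All names and numbers here are illustrative assumptions, not the paper's definition.

def evidence_sensitivity(acc_full: float, acc_without: dict[str, float]) -> dict[str, float]:
    """Per-CE sensitivity: relative accuracy drop when that evidence type is removed."""
    return {ce: (acc_full - acc) / acc_full for ce, acc in acc_without.items()}

def utilization_gap(sensitivity: dict[str, float]) -> float:
    """Cross-modal gap: spread between the most- and least-utilized CE types."""
    values = list(sensitivity.values())
    return max(values) - min(values)

# Toy numbers: accuracy on full cases vs. with one CE type ablated.
acc_full = 0.60
acc_without = {"imaging": 0.42, "pathology": 0.45, "history": 0.57}

sens = evidence_sensitivity(acc_full, acc_without)
gap = utilization_gap(sens)  # a smaller gap would indicate more balanced CE utilization
```

Under this reading, the paper's finding that a smaller gap correlates with higher diagnostic accuracy would mean models drawing on all evidence types more evenly diagnose better.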

Tags

MLLM, Multimodal, Medical Diagnosis, Clinical Decision-Making, Benchmarking

arXiv Categories

cs.CL