Multimodal Learning Relevance: 9/10

Adaptive Clinical-Aware Latent Diffusion for Multimodal Brain Image Generation and Missing Modality Imputation

Rong Zhou, Houliang Zhou, Yao Su, Brian Y. Chen, Yu Zhang, Lifang He, Alzheimer's Disease Neuroimaging Initiative
arXiv: 2603.09931v1 Published: 2026-03-10 Updated: 2026-03-10

AI Summary

ACADiff uses clinical information to guide a diffusion model for imputing missing brain imaging modalities, improving downstream diagnostic performance.

Key Contributions

  • Proposes the ACADiff framework for synthesizing missing brain imaging modalities
  • Uses GPT-4o-encoded prompts for semantic clinical guidance
  • Validates ACADiff's superiority over existing baselines on the ADNI dataset

Methodology

An adaptive-fusion, clinically guided diffusion model progressively denoises latent representations to generate and impute multimodal brain images.
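The summary above describes conditional latent denoising only in general terms. As a rough illustration (the actual ACADiff denoiser, noise schedule, and conditioning interface are not given here), a standard DDPM-style reverse loop over a latent, conditioned on fused imaging/clinical features, looks like:

```python
import numpy as np

def denoiser(z_t, t, cond):
    # Hypothetical stand-in for the learned noise predictor eps_theta(z_t, t, cond);
    # the real model would be a neural network attending to the conditioning.
    return 0.1 * z_t + 0.01 * cond

def ddpm_reverse(z_T, cond, betas, rng):
    """Run the standard DDPM reverse process on a latent z_T, conditioned on
    a vector summarizing available imaging data and clinical metadata."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    z = z_T
    for t in reversed(range(len(betas))):
        eps = denoiser(z, t, cond)
        # Mean of z_{t-1} given z_t and the predicted noise.
        z = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add sampling noise except at the final step
            z = z + np.sqrt(betas[t]) * rng.standard_normal(z.shape)
    return z

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 50)   # toy linear noise schedule
z_T = rng.standard_normal(8)          # noisy latent of the missing modality
cond = rng.standard_normal(8)         # fused available-modality + clinical features
z_0 = ddpm_reverse(z_T, cond, betas, rng)
print(z_0.shape)  # (8,)
```

This is a generic conditional-diffusion sketch, not the paper's architecture; it only shows where the clinical conditioning enters the reverse process.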

Original Abstract

Multimodal neuroimaging provides complementary insights for Alzheimer's disease diagnosis, yet clinical datasets frequently suffer from missing modalities. We propose ACADiff, a framework that synthesizes missing brain imaging modalities through adaptive clinical-aware diffusion. ACADiff learns mappings between incomplete multimodal observations and target modalities by progressively denoising latent representations while attending to available imaging data and clinical metadata. The framework employs adaptive fusion that dynamically reconfigures based on input availability, coupled with semantic clinical guidance via GPT-4o-encoded prompts. Three specialized generators enable bidirectional synthesis among sMRI, FDG-PET, and AV45-PET. Evaluated on ADNI subjects, ACADiff achieves superior generation quality and maintains robust diagnostic performance even under extreme 80% missing scenarios, outperforming all existing baselines. To promote reproducibility, code is available at https://github.com/rongzhou7/ACADiff
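The abstract says the adaptive fusion "dynamically reconfigures based on input availability." One minimal way to realize that idea (a masked softmax over per-modality features; this is a hypothetical sketch, not the paper's actual mechanism) is:

```python
import numpy as np

def adaptive_fuse(features, available, logits):
    """Availability-aware fusion sketch.
    features: (M, D) per-modality feature vectors; available: (M,) bool mask;
    logits: (M,) learned fusion scores. Returns a (D,) fused vector."""
    logits = np.where(available, logits, -np.inf)  # exclude missing modalities
    w = np.exp(logits - logits.max())              # stable softmax over present ones
    w = w / w.sum()
    return w @ features

# Toy example: three modalities (sMRI, FDG-PET, AV45-PET), FDG-PET missing.
feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [2.0, 2.0]])
mask = np.array([True, False, True])
fused = adaptive_fuse(feats, mask, np.zeros(3))
print(fused)  # equal weights over the two available modalities -> [1.5 1. ]
```

The point of the sketch is that the fusion weights renormalize over whichever modalities are present, so the same module handles any missingness pattern.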

Tags

Diffusion Models Multimodal Learning Medical Imaging Brain Imaging Alzheimer's Disease

arXiv Categories

cs.CV cs.AI