Multimodal Learning relevance: 7/10

Explainability in Generative Medical Diffusion Models: A Faithfulness-Based Analysis on MRI Synthesis

Surjo Dey, Pallabi Saikia
arXiv: 2602.09781v1 · Published: 2026-02-10 · Updated: 2026-02-10

AI Summary

This work uses faithfulness analysis to improve the explainability of medical diffusion models for MRI synthesis, strengthening the trustworthiness of AI in healthcare applications.

Key Contributions

  • Proposes a faithfulness-based explainability framework
  • Analyzes the performance of prototype-based methods including ProtoPNet, EPPNet, and ProtoPool
  • Shows that EPPNet achieves the highest faithfulness for MRI synthesis

Methodology

The framework analyzes the denoising trajectory of the diffusion model, applies prototype-based explanation methods, and uses faithfulness as the evaluation metric to understand the internal mechanics of image generation.
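To make the prototype-matching idea concrete, the sketch below is a minimal, assumption-laden example (the function name, tensor shapes, and ProtoPNet-style max-pooling over spatial locations are our own; this is not the paper's implementation). It scores how strongly each learned prototype appears in an intermediate feature map taken from one denoising step:

```python
import numpy as np

def prototype_similarity(feature_map, prototypes):
    """Score how strongly each prototype is present in a feature map.

    feature_map: (C, H, W) activations from an intermediate denoising step.
    prototypes:  (P, C) prototype vectors learned from training features.
    Returns a (P,) vector of max cosine similarities over spatial locations
    (ProtoPNet-style max-pooling).
    """
    C, H, W = feature_map.shape
    patches = feature_map.reshape(C, H * W).T                     # (H*W, C) spatial patches
    patches = patches / (np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8)
    protos = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    sims = patches @ protos.T                                     # (H*W, P) cosine similarities
    return sims.max(axis=0)                                       # best-matching location per prototype
```

A high score for a prototype indicates that a training-derived feature pattern is strongly present somewhere in the generated image's feature map, which is the link between generated and training features that the prototype methods expose.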

Original Abstract

This study investigates the explainability of generative diffusion models in the context of medical imaging, focusing on magnetic resonance imaging (MRI) synthesis. Although diffusion models have shown strong performance in generating realistic medical images, their internal decision-making process remains largely opaque. We present a faithfulness-based explainability framework that analyzes how prototype-based explainability methods such as ProtoPNet (PPNet), Enhanced ProtoPNet (EPPNet), and ProtoPool can link generated features to training features. Our study focuses on understanding the reasoning behind image formation through the denoising trajectory of the diffusion model, followed by prototype-based explanation with faithfulness analysis. Experimental analysis shows that EPPNet achieves the highest faithfulness (score 0.1534), offering more reliable insights into, and explanations of, the generative process. The results highlight that diffusion models can be made more transparent and trustworthy through faithfulness-based explanations, contributing to safer and more interpretable applications of generative AI in healthcare.
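The abstract does not define how its faithfulness score is computed. A common deletion-style proxy, sketched below purely as an illustration (the function name, `top_k` parameter, and metric are our assumptions, not the paper's method), zeroes out the spatial locations an explanation points at and measures the resulting drop in prototype activation; a larger drop means the explanation identified features the score genuinely depends on:

```python
import numpy as np

def deletion_faithfulness(feature_map, prototype, top_k=5):
    """Deletion-style faithfulness proxy (a sketch, not the paper's exact metric).

    Zero out the top_k spatial locations most similar to the prototype and
    return the drop in the prototype's max cosine activation.
    """
    C, H, W = feature_map.shape
    patches = feature_map.reshape(C, H * W).T                     # (H*W, C)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    norms = np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8
    sims = (patches / norms) @ p                                  # per-location similarity
    base = sims.max()                                             # activation before deletion

    masked = feature_map.reshape(C, H * W).copy()
    idx = np.argsort(sims)[-top_k:]                               # most prototype-like locations
    masked[:, idx] = 0.0                                          # delete the explained evidence
    mp = masked.T
    norms_m = np.linalg.norm(mp, axis=1, keepdims=True) + 1e-8
    sims_m = (mp / norms_m) @ p
    return base - sims_m.max()                                    # drop in activation
```

Averaging such drops across prototypes and images yields a scalar faithfulness score on which methods like PPNet, EPPNet, and ProtoPool can be compared, mirroring the kind of ranking the paper reports.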

Tags

Diffusion Models · Explainability · Medical Imaging · MRI · Faithfulness

arXiv Categories

cs.LG cs.AI