Multimodal Learning — Relevance: 7/10

XMorph: Explainable Brain Tumor Analysis Via LLM-Assisted Hybrid Deep Intelligence

Sepehr Salem Ghahfarokhi, M. Moein Esfahani, Raj Sunderraman, Vince Calhoun, Mohammed Alser
arXiv: 2602.21178v1  Published: 2026-02-24  Updated: 2026-02-24

AI Summary

XMorph achieves explainable brain tumor diagnosis through LLM-assisted hybrid deep intelligence, improving diagnostic accuracy.

Key Contributions

  • Proposes an Information-Weighted Boundary Normalization (IWBN) mechanism that enriches the morphological representation of tumors
  • Develops a dual-channel explainable AI module combining GradCAM++ visualizations with LLM-generated textual explanations
  • Delivers a brain tumor classification system that is both highly accurate (96.0%) and interpretable
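The summary does not spell out the IWBN formula. As a rough illustration only, assuming "boundary information" can be approximated by local image-gradient magnitude, a minimal sketch of a boundary-weighted normalization (the function name `iwbn` and parameter `alpha` are hypothetical, not from the paper's code):

```python
import numpy as np

def iwbn(image, alpha=2.0, eps=1e-6):
    """Hypothetical sketch of information-weighted boundary normalization.

    Each pixel is weighted by local gradient magnitude (a simple proxy
    for boundary information) before standard normalization, so that
    boundary regions contribute more to the normalized representation.
    """
    gy, gx = np.gradient(image.astype(np.float64))
    boundary = np.sqrt(gx**2 + gy**2)                    # edge strength
    w = 1.0 + alpha * boundary / (boundary.max() + eps)  # emphasize boundaries
    weighted = image * w
    mu, sigma = weighted.mean(), weighted.std() + eps
    return (weighted - mu) / sigma                       # zero-mean, unit-std map
```

The actual IWBN mechanism in XMorph additionally incorporates nonlinear chaotic and clinically validated features, which this sketch omits.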

Methodology

Combines deep learning, IWBN, GradCAM++, and an LLM to classify brain tumors while providing interpretable visual and textual explanations.
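GradCAM++ itself is a published attribution method; to make the visual channel concrete, here is a compact PyTorch sketch of GradCAM++ on a toy classifier (`TinyCNN` and `gradcam_pp` are illustrative names, not XMorph's actual code or architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Toy 3-class classifier standing in for the real backbone."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        a = self.features(x)            # feature maps used for the CAM
        pooled = a.mean(dim=(2, 3))     # global average pooling
        return self.head(pooled), a

def gradcam_pp(model, x, class_idx):
    """GradCAM++ heatmap for a single image of shape (1, 1, H, W)."""
    logits, acts = model(x)
    acts.retain_grad()
    logits[0, class_idx].backward()
    g = acts.grad[0]                    # gradients, shape (C, H, W)
    a = acts[0].detach()
    g2, g3 = g**2, g**3
    denom = 2 * g2 + a.sum(dim=(1, 2), keepdim=True) * g3
    alpha = g2 / torch.where(denom != 0, denom, torch.ones_like(denom))
    weights = (alpha * F.relu(g)).sum(dim=(1, 2))        # per-channel weights
    cam = F.relu((weights[:, None, None] * a).sum(0))    # (H, W) heatmap
    return cam / (cam.max() + 1e-8)                      # normalize to [0, 1]
```

In the dual-channel design described above, a heatmap like this would feed the visual channel, while the LLM channel turns the model's evidence into a textual rationale.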

Original Abstract

Deep learning has significantly advanced automated brain tumor diagnosis, yet clinical adoption remains limited by interpretability and computational constraints. Conventional models often act as opaque "black boxes" and fail to quantify the complex, irregular tumor boundaries that characterize malignant growth. To address these challenges, we present XMorph, an explainable and computationally efficient framework for fine-grained classification of three prominent brain tumor types: glioma, meningioma, and pituitary tumors. We propose an Information-Weighted Boundary Normalization (IWBN) mechanism that emphasizes diagnostically relevant boundary regions alongside nonlinear chaotic and clinically validated features, enabling a richer morphological representation of tumor growth. A dual-channel explainable AI module combines GradCAM++ visual cues with LLM-generated textual rationales, translating model reasoning into clinically interpretable insights. The proposed framework achieves a classification accuracy of 96.0%, demonstrating that explainability and high performance can co-exist in AI-based medical imaging systems. The source code and materials for XMorph are all publicly available at: https://github.com/ALSER-Lab/XMorph.

Tags

Medical Imaging · Deep Learning · Explainable AI · LLM

arXiv Categories

cs.CV cs.AI