Multimodal Learning Relevance: 9/10

MLLM-HWSI: A Multimodal Large Language Model for Hierarchical Whole Slide Image Understanding

Basit Alawode, Arif Mahmood, Muaz Khalifa Al-Radi, Shahad Albastaki, Asim Khan, Muhammad Bilal, Moshira Ali Abdalla, Mohammed Bennamoun, Sajid Javed
arXiv: 2603.23067v1 Published: 2026-03-24 Updated: 2026-03-24

AI Summary

MLLM-HWSI is a multimodal large language model for pathology whole slide image (WSI) understanding that uses a hierarchical structure to enable fine-grained analysis.

Key Contributions

  • Proposes the MLLM-HWSI model for hierarchical WSI understanding
  • Introduces a hierarchical contrastive objective and a cross-scale consistency loss
  • Uses a Cell-Cell Attention Fusion (CCAF) transformer to extract cellular features
  • Achieves state-of-the-art results on 13 WSI benchmarks
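The digest describes CCAF only at a high level: it aggregates segmented cell embeddings into one compact cellular token per patch. A minimal sketch of such attention-based fusion, using single-query cross attention (the shapes, the single learnable query, and the linear key/value maps are assumptions for illustration, not the paper's exact architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ccaf_fuse(cell_embs, query, w_k, w_v):
    """Fuse N cell embeddings (N, d) into one cellular token (d,)
    via single-query cross attention (hypothetical CCAF stand-in)."""
    keys = cell_embs @ w_k                            # (N, d)
    vals = cell_embs @ w_v                            # (N, d)
    scores = keys @ query / np.sqrt(query.shape[0])   # (N,)
    attn = softmax(scores)                            # weights over cells
    return attn @ vals                                # (d,) compact token

rng = np.random.default_rng(0)
d = 16
cells = rng.normal(size=(32, d))   # 32 segmented cells in one patch
token = ccaf_fuse(cells, rng.normal(size=d),
                  rng.normal(size=(d, d)), rng.normal(size=(d, d)))
print(token.shape)  # (16,)
```

A full implementation would make the query, key, and value maps learnable and likely stack several such attention layers, but the pooling pattern is the same.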

Methodology

Decomposes each WSI into multi-scale embeddings, aligns them with a hierarchical contrastive objective and a cross-scale consistency loss, fuses the visual and text features, and feeds the result to an LLM for reasoning.
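The two training signals named above can be sketched as an InfoNCE-style contrastive loss between paired visual and text embeddings at one scale, plus a penalty tying a finer scale to the next coarser one. The temperature, the symmetric cross-entropy form, and the squared-error consistency term are assumptions; the paper's exact formulation may differ:

```python
import numpy as np

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce(vis, txt, tau=0.07):
    """Symmetric InfoNCE over a batch of paired (visual, text) embeddings."""
    v, t = l2norm(vis), l2norm(txt)
    logits = v @ t.T / tau                  # (B, B) similarity matrix
    labels = np.arange(len(v))              # matched pairs on the diagonal
    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()
    return 0.5 * (xent(logits) + xent(logits.T))

def cross_scale_consistency(fine, coarse):
    """Penalize disagreement between a finer-scale embedding and the
    corresponding next-coarser-scale embedding."""
    return float(((l2norm(fine) - l2norm(coarse)) ** 2).mean())

rng = np.random.default_rng(1)
B, d = 8, 32
patch_vis = rng.normal(size=(B, d))
patch_txt = rng.normal(size=(B, d))
region_vis = rng.normal(size=(B, d))
loss = info_nce(patch_vis, patch_txt) + cross_scale_consistency(patch_vis, region_vis)
print(loss)
```

Perfectly aligned pairs drive the contrastive term toward zero, while the consistency term keeps, for example, patch-level and region-level embeddings of the same tissue semantically close.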

Original Abstract

Whole Slide Images (WSIs) exhibit hierarchical structure, where diagnostic information emerges from cellular morphology, regional tissue organization, and global context. Existing Computational Pathology (CPath) Multimodal Large Language Models (MLLMs) typically compress an entire WSI into a single embedding, which hinders fine-grained grounding and ignores how pathologists synthesize evidence across different scales. We introduce MLLM-HWSI, a Hierarchical WSI-level MLLM that aligns visual features with pathology language at four distinct scales: cell as word, patch as phrase, region as sentence, and WSI as paragraph, to support interpretable evidence-grounded reasoning. MLLM-HWSI decomposes each WSI into multi-scale embeddings with scale-specific projectors and jointly enforces (i) a hierarchical contrastive objective and (ii) a cross-scale consistency loss, preserving semantic coherence from cells to the WSI. We compute diagnostically relevant patches and aggregate segmented cell embeddings into a compact cellular token per patch using a lightweight Cell-Cell Attention Fusion (CCAF) transformer. The projected multi-scale tokens are fused with text tokens and fed to an instruction-tuned LLM for open-ended reasoning, VQA, report, and caption generation tasks. Trained in three stages, MLLM-HWSI achieves new SOTA results on 13 WSI-level benchmarks across six CPath tasks. By aligning language with multi-scale visual evidence, MLLM-HWSI provides accurate, interpretable outputs that mirror diagnostic workflows and advance holistic WSI understanding. Code is available at: https://github.com/BasitAlawode/HWSI-MLLM.
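The abstract describes scale-specific projectors whose outputs are fused with text tokens before the instruction-tuned LLM. A minimal sketch of that token-assembly step, with simple linear projectors, invented per-scale dimensions, an assumed shared LLM hidden size, and plain concatenation as the fusion (all of these are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
d_llm = 64                                                   # assumed LLM hidden size
scales = {"cell": 16, "patch": 32, "region": 48, "wsi": 96}  # assumed dims

# One linear projector per scale, mapping into the LLM token space.
projectors = {s: rng.normal(size=(d, d_llm)) / np.sqrt(d)
              for s, d in scales.items()}

def build_input(visual_tokens, text_tokens):
    """Project each scale's visual tokens into the LLM space and
    prepend them to the text tokens as one input sequence."""
    projected = [visual_tokens[s] @ projectors[s] for s in scales]
    return np.concatenate(projected + [text_tokens], axis=0)

visual = {s: rng.normal(size=(n, scales[s]))
          for s, n in zip(scales, (10, 6, 3, 1))}  # tokens per scale
text = rng.normal(size=(5, d_llm))                 # tokenized prompt
seq = build_input(visual, text)
print(seq.shape)  # (25, 64)
```

In practice the projectors would be trained jointly with the contrastive and consistency objectives, and the fused sequence would carry positional or scale-identity information, but the interface to the LLM is this flat token sequence.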

Tags

MLLM Whole Slide Image Computational Pathology Hierarchical Learning

arXiv Category

cs.CV