Concept-to-Pixel: Prompt-Free Universal Medical Image Segmentation
AI Summary
C2P proposes a prompt-free universal medical image segmentation framework that distills knowledge from a multimodal LLM to achieve generalization across imaging modalities.
Key Contributions
- Proposes the Concept-to-Pixel (C2P) framework for prompt-free universal medical image segmentation.
- Leverages a multimodal LLM to distill medical concepts into learnable semantic tokens, and introduces geometric tokens to enforce structural constraints.
- Introduces a geometry-aware inference consensus mechanism that assesses prediction reliability and suppresses outliers.
Methodology
An MLLM converts high-level medical concepts into semantic tokens and geometric tokens, which interact with image features to generate input-specific dynamic kernels for segmentation, followed by geometry-aware inference.
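The token-to-kernel step can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the token shapes, the additive fusion, and the use of a single 1×1 dynamic kernel are all assumptions made for clarity.

```python
import numpy as np

def dynamic_kernel_segment(features, semantic_tok, geometric_tok):
    """Sketch of token-conditioned dynamic-kernel mask prediction.

    features:      (C, H, W) image feature map
    semantic_tok:  (C,) distilled semantic token (hypothetical shape)
    geometric_tok: (C,) geometric token (hypothetical shape)
    Returns a (H, W) soft mask in [0, 1].
    """
    # Assumption: fuse the two tokens into one input-specific 1x1 conv kernel.
    kernel = semantic_tok + geometric_tok
    # Apply the dynamic kernel as a 1x1 convolution over the feature map.
    logits = np.einsum("chw,c->hw", features, kernel)
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> soft mask

rng = np.random.default_rng(0)
mask = dynamic_kernel_segment(rng.normal(size=(8, 4, 4)),
                              rng.normal(size=8), rng.normal(size=8))
```

Because the kernel is built from the tokens of the current input, the same decoder weights can produce different segmentation behavior per image, which is the point of the dynamic-kernel design.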
原文摘要
Universal medical image segmentation seeks to use a single foundational model to handle diverse tasks across multiple imaging modalities. However, existing approaches often rely heavily on manual visual prompts or retrieved reference images, which limits their automation and robustness. In addition, naive joint training across modalities often fails to address large domain shifts. To address these limitations, we propose Concept-to-Pixel (C2P), a novel prompt-free universal segmentation framework. C2P explicitly separates anatomical knowledge into two components: Geometric and Semantic representations. It leverages Multimodal Large Language Models (MLLMs) to distill abstract, high-level medical concepts into learnable Semantic Tokens and introduces explicitly supervised Geometric Tokens to enforce universal physical and structural constraints. These disentangled tokens interact deeply with image features to generate input-specific dynamic kernels for precise mask prediction. Furthermore, we introduce a Geometry-Aware Inference Consensus mechanism, which utilizes the model's predicted geometric constraints to assess prediction reliability and suppress outliers. Extensive experiments and analysis on a unified benchmark comprising eight diverse datasets across seven modalities demonstrate the significant superiority of our jointly trained approach over universal and single-model approaches. Remarkably, our unified model demonstrates strong generalization, achieving impressive results not only on zero-shot tasks involving unseen cases but also in cross-modal transfers across similar tasks. Code is available at: https://github.com/Yundi218/Concept-to-Pixel
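The Geometry-Aware Inference Consensus idea can be illustrated with a toy sketch. The abstract does not specify which geometric constraints are predicted or how reliability is computed; here the constraint is assumed to be an expected foreground area, and candidates deviating too far from it are suppressed before averaging. Function names and the tolerance threshold are hypothetical.

```python
import numpy as np

def geometry_consensus(masks, expected_area, tol=0.5):
    """Toy sketch: score candidate masks against a predicted geometric
    constraint (here, expected foreground area) and suppress outliers.

    masks:         list of (H, W) binary masks (candidate predictions)
    expected_area: scalar area predicted by the model (assumed interface)
    tol:           maximum relative deviation to keep a candidate
    Returns the pixelwise mean of the surviving candidates.
    """
    areas = np.array([m.sum() for m in masks], dtype=float)
    # Relative deviation from the predicted area serves as a reliability score.
    dev = np.abs(areas - expected_area) / max(expected_area, 1e-8)
    keep = dev <= tol
    if not keep.any():  # fall back to the closest candidate
        keep = dev == dev.min()
    return np.mean([m for m, k in zip(masks, keep) if k], axis=0)

m1 = np.zeros((4, 4)); m1[1:3, 1:3] = 1  # area 4: matches the constraint
m2 = np.ones((4, 4))                     # area 16: geometric outlier
consensus = geometry_consensus([m1, m2], expected_area=4.0)
```

In this toy case the all-ones candidate deviates far from the predicted area and is dropped, so the consensus equals the plausible mask; the paper's actual mechanism presumably uses richer, learned geometric constraints.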