Rethinking MLLM Itself as a Segmenter with a Single Segmentation Token
AI Summary
Proposes SELF1E, a decoder-free approach that lets an MLLM perform image segmentation efficiently through a single segmentation token, with no additional mask decoder.
Key Contributions
- Proposes SELF1E, an MLLM segmentation method built on a single segmentation token
- Improves feature precision by retaining image features at their original resolution and refilling them with residual features
- Combines pixel-unshuffle operations with a dual-pathway attention mechanism to strengthen feature interaction
Methodology
High-resolution image features are retained and refilled with residual features extracted from the LLM-processed compressed features; pixel-unshuffle operations and a dual-pathway attention mask then enable decoder-free segmentation.
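To make the resolution tradeoff concrete, here is a minimal pure-Python sketch of the pixel-unshuffle (space-to-depth) operation the method builds on, and its inverse. This is an illustrative toy on a single-channel map, not the paper's implementation; function names and the nested-list representation are our own.

```python
def pixel_unshuffle(grid, r):
    """Rearrange an H x W grid into (H//r) x (W//r) cells of r*r values,
    trading spatial resolution for channel depth."""
    h, w = len(grid), len(grid[0])
    assert h % r == 0 and w % r == 0
    out = []
    for i in range(0, h, r):
        row = []
        for j in range(0, w, r):
            # Flatten each r x r spatial block into one channel vector.
            row.append([grid[i + di][j + dj]
                        for di in range(r) for dj in range(r)])
        out.append(row)
    return out

def pixel_shuffle(cells, r):
    """Inverse: expand each r*r channel vector back into an r x r block."""
    h, w = len(cells) * r, len(cells[0]) * r
    grid = [[0] * w for _ in range(h)]
    for ci, row in enumerate(cells):
        for cj, vec in enumerate(row):
            for k, v in enumerate(vec):
                grid[ci * r + k // r][cj * r + k % r] = v
    return grid

feat = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
packed = pixel_unshuffle(feat, 2)    # 2 x 2 cells, 4 channels each
restored = pixel_shuffle(packed, 2)  # lossless round trip
```

Pixel-shuffling by a factor r shrinks the spatial grid r-fold per axis; the method's refill step compensates for this loss by adding residuals from the LLM-processed compressed features back onto the uncompressed-resolution features.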
Original Abstract
Recent segmentation methods leveraging Multi-modal Large Language Models (MLLMs) have shown reliable object-level segmentation and enhanced spatial perception. However, almost all previous methods predominantly rely on specialist mask decoders to interpret masks from generated segmentation-related embeddings and visual features, or incorporate multiple additional tokens to assist. This paper aims to investigate whether and how we can unlock segmentation from MLLM itSELF with 1 segmentation Embedding (SELF1E) while achieving competitive results, which eliminates the need for external decoders. To this end, our approach targets the fundamental limitation of resolution reduction in pixel-shuffled image features from MLLMs. First, we retain image features at their original uncompressed resolution, and refill them with residual features extracted from MLLM-processed compressed features, thereby improving feature precision. Subsequently, we integrate pixel-unshuffle operations on image features with and without LLM processing, respectively, to unleash the details of compressed features and amplify the residual features under uncompressed resolution, which further enhances the resolution of refilled features. Moreover, we redesign the attention mask with dual perception pathways, i.e., image-to-image and image-to-segmentation, enabling rich feature interaction between pixels and the segmentation token. Comprehensive experiments across multiple segmentation tasks validate that SELF1E achieves performance competitive with specialist mask decoder-based methods, demonstrating the feasibility of decoder-free segmentation in MLLMs. Project page: https://github.com/ANDYZAQ/SELF1E.
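The abstract's "dual perception pathways" attention mask can be sketched as follows. This is a hypothetical reading of the design, not code from the project: we assume a token layout of image tokens, then text tokens, then a single [SEG] token, and open two extra pathways on top of a standard causal mask, i.e. full image-to-image attention and image-to-segmentation attention.

```python
def dual_pathway_mask(n_img, n_txt):
    """Boolean attention mask: mask[q][k] is True if query position q
    may attend to key position k. Layout: image tokens, text tokens,
    one trailing [SEG] token (assumed layout for illustration)."""
    n = n_img + n_txt + 1           # +1 for the single [SEG] token
    seg = n - 1
    # Standard causal mask: each position sees itself and earlier ones.
    mask = [[q >= k for k in range(n)] for q in range(n)]
    for q in range(n_img):
        for k in range(n_img):
            mask[q][k] = True       # image-to-image: bidirectional
        mask[q][seg] = True         # image-to-segmentation pathway
    return mask

m = dual_pathway_mask(n_img=4, n_txt=3)
```

With these two pathways, every pixel token can exchange information with every other pixel token and with the segmentation token, which is the interaction the abstract says replaces the external mask decoder.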