Multimodal Learning Relevance: 9/10

Perceptio: Perception Enhanced Vision Language Models via Spatial Token Generation

Yuchen Li, Amanmeet Garg, Shalini Chaudhuri, Rui Zhao, Garin Kessler
arXiv: 2603.18795v1 Published: 2026-03-19 Updated: 2026-03-19

AI Summary

Perceptio strengthens the spatial reasoning of LVLMs through explicit semantic segmentation and depth tokens, achieving state-of-the-art results on multiple benchmarks.

Key Contributions

  • Proposes Perceptio, a perception-enhanced LVLM
  • Generates spatial tokens via VQ-VAE depth encoding and SAM2 segmentation
  • Introduces composite depth-token objectives and a soft-merging technique to stabilize training

Methodology

Spatial tokens are generated with a VQ-VAE depth codebook and SAM2 segmentation, then integrated into the LLM's autoregressive sequence; the model is trained with a multi-task co-training strategy across diverse datasets.
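The integration described above — the model emits spatial tokens before answering — can be illustrated with a minimal sequence-layout sketch. The marker tokens (<seg>, <depth>, and so on) and the helper below are hypothetical placeholders, not Perceptio's actual vocabulary or code:

```python
# Illustrative sketch of the "emit spatial tokens first, then answer"
# sequence layout; marker names (<seg>, <depth>, ...) are hypothetical
# placeholders, not the paper's actual token vocabulary.

def build_training_sequence(question, seg_tokens, depth_tokens, answer):
    """Place perception tokens before the answer so the model is trained
    to produce an explicit spatial interpretation first."""
    seq = ["<question>", *question.split(), "</question>"]
    seq += ["<seg>", *seg_tokens, "</seg>"]        # SAM2-derived segmentation tokens
    seq += ["<depth>", *depth_tokens, "</depth>"]  # VQ-VAE depth codebook indices
    seq += ["<answer>", *answer.split(), "</answer>"]
    return seq

seq = build_training_sequence(
    "where is the mug",
    ["s12", "s47"],            # placeholder segmentation token ids
    ["d3", "d901", "d88"],     # placeholder depth token ids
    "on the left shelf",
)
print(seq)
```

During multi-task co-training, different datasets would supervise different spans of such a sequence (segmentation, depth, or the final answer).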

Original Abstract

Large Vision Language Models (LVLMs) excel at semantic understanding but struggle with fine-grained spatial grounding, as the model must implicitly infer complex geometry without ever producing a spatial interpretation. We present Perceptio, a perception-enhanced LVLM with 2D and 3D spatial reasoning abilities, enabled via explicit semantic segmentation tokens and depth tokens generated directly within the autoregressive sequence. Concretely, we (i) distill a VQ-VAE depth codebook from a strong monocular teacher to tokenize dense depth into compact sequences, and (ii) integrate SAM2-based semantic segmentation tokens and VQ-VAE depth tokens inside the LLM so the model first emits spatial tokens and then answers. To stabilize depth token generation, we introduce novel composite depth-token objectives (marker, token, and count losses) and a soft-merging technique for differentiable reconstruction. We adopt a multi-task co-training strategy across diverse datasets, letting the model learn perception tokens to tackle multiple downstream tasks. Building on InternVL, Perceptio achieves state-of-the-art performance across benchmarks: improving referring expression segmentation by +0.8/+1.4/+1.1 cIoU on RefCOCO/+/g, HardBLINK spatial understanding accuracy by 10.3%, and MMBench accuracy by 1.0%, demonstrating that explicit spatial chain-of-thought materially strengthens spatial grounding in LVLMs.
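The abstract's "soft-merging technique for differentiable reconstruction" is not spelled out here, but a common way to make a VQ codebook lookup differentiable is to replace the hard argmax with a softmax-weighted mix of codebook entries. The sketch below is one assumed formulation under that interpretation, not the paper's exact method:

```python
import numpy as np

# Minimal sketch of "soft merging" for differentiable VQ depth-token
# reconstruction (an assumed formulation; this summary does not give the
# paper's exact equations). Rather than a hard argmax codebook lookup,
# codebook entries are mixed with softmax weights so gradients can flow
# through the token scores during training.

def soft_merge(logits, codebook, temperature=1.0):
    """logits: (N, K) scores over K codebook entries; codebook: (K, D).
    Returns (N, D) softly merged embeddings."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    w = np.exp(z)
    w = w / w.sum(axis=1, keepdims=True)      # softmax weights over codes
    return w @ codebook                        # convex combination of codes

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))             # K=8 codes of dimension D=4
logits = rng.normal(size=(3, 8))               # scores at 3 depth positions

soft = soft_merge(logits, codebook, temperature=0.5)
hard = codebook[logits.argmax(axis=1)]         # non-differentiable baseline
print(soft.shape)                              # (3, 4)
```

As the temperature approaches zero, the soft mixture collapses to the hard codebook lookup, which is why this relaxation can stabilize depth-token training while preserving the discrete tokenization at inference time.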

Tags

Vision-Language Models Spatial Reasoning Semantic Segmentation

arXiv Categories

cs.CV cs.AI