Multimodal Learning · Relevance: 9/10

Boosting MLLM Spatial Reasoning with Geometrically Referenced 3D Scene Representations

Jiangye Yuan, Gowri Kumar, Baoyuan Wang
arXiv: 2603.08592v1 · Published: 2026-03-09 · Updated: 2026-03-09

AI Summary

The paper proposes GR3D, a method that strengthens MLLMs' 3D spatial reasoning without any additional training and improves zero-shot performance.

Key Contributions

  • Proposes GR3D, a geometrically referenced 3D scene representation
  • GR3D improves MLLM performance on 3D spatial reasoning tasks
  • GR3D requires no additional training and applies readily to different MLLMs

Methodology

GR3D annotates objects in the input images with unique IDs and encodes their 3D geometric attributes as textual references indexed by those IDs; the MLLM then uses these references, together with the 2D visual features, to carry out spatial reasoning.
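To make the representation concrete, here is a minimal sketch of what an ID-indexed textual scene reference might look like. The field names (`centroid`, `size`), units, and formatting are illustrative assumptions, not the paper's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    # Hypothetical attribute set; the paper does not specify the exact fields here.
    obj_id: int
    label: str
    centroid: tuple  # (x, y, z) position in meters, assumed world frame
    size: tuple      # (w, h, d) bounding-box extents in meters

def gr3d_text(objects):
    """Render ID-indexed geometric attributes as plain text for an MLLM prompt."""
    lines = []
    for o in objects:
        cx, cy, cz = o.centroid
        w, h, d = o.size
        lines.append(
            f"[{o.obj_id}] {o.label}: centroid=({cx:.2f}, {cy:.2f}, {cz:.2f}) m, "
            f"size=({w:.2f} x {h:.2f} x {d:.2f}) m"
        )
    return "\n".join(lines)

scene = [
    SceneObject(1, "sofa", (0.5, 0.0, 2.1), (1.8, 0.9, 0.8)),
    SceneObject(2, "table", (-0.3, 0.0, 1.4), (1.2, 0.7, 0.6)),
]
print(gr3d_text(scene))
```

The IDs in this text block would match annotations drawn on the images, letting the model tie each textual 3D reference back to the corresponding visual region.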

Original Abstract

While Multimodal Large Language Models (MLLMs) have achieved remarkable success in 2D visual understanding, their ability to reason about 3D space remains limited. To address this gap, we introduce geometrically referenced 3D scene representations (GR3D). Given a set of input images, GR3D annotates objects in the images with unique IDs and encodes their 3D geometric attributes as textual references indexed by these IDs. This representation enables MLLMs to interpret 3D cues using their advanced language-based skills in mathematical reasoning, while concurrently analyzing 2D visual features in a tightly coupled way. We present a simple yet effective approach based on GR3D, which requires no additional training and is readily applicable to different MLLMs. Implemented in a zero-shot setting, our approach boosts GPT-5's performance on VSI-Bench by 8% overall and more than 11% on tasks that rely heavily on spatial layout understanding. Qualitative studies further demonstrate that GR3D empowers MLLMs to perform complex spatial reasoning with highly sparse input views.

Tags

MLLM · 3D Reasoning · Spatial Reasoning · Scene Representation

arXiv Category

cs.CV