Multimodal Learning (Relevance: 9/10)

Unleashing Spatial Reasoning in Multimodal Large Language Models via Textual Representation Guided Reasoning

Jiacheng Hua, Yishu Yin, Yuhang Wu, Tai Wang, Yifei Huang, Miao Liu
arXiv: 2603.23404v1 Published: 2026-03-24 Updated: 2026-03-24

AI Summary

The paper proposes TRACE, a method that guides MLLMs through 3D spatial reasoning via textual representations, improving video understanding.

Key Contributions

  • Proposes TRACE, a prompting method that uses textual representations for 3D spatial reasoning.
  • TRACE encodes meta-context, camera trajectories, and object entities.
  • Validates the effectiveness of TRACE on VSI-Bench and OST-Bench.

Methodology

TRACE prompts the MLLM to generate a textual representation of the 3D environment as an intermediate reasoning step, improving the accuracy of spatial question answering.
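A minimal sketch of what a TRACE-style prompt could look like, built only from the components the paper names (meta-context, camera trajectory, object entities). The template wording, the `ask_spatial_question` helper, and the `mllm_client.generate` interface are illustrative assumptions, not the paper's actual code.

```python
# Hypothetical TRACE-style prompt: the wording and client API below are
# assumptions for illustration, not taken from the paper.

TRACE_PROMPT = """You are given an egocentric video of a 3D scene.
Before answering, write a textual representation of the scene with:
1. Meta-context: the scene type and overall layout.
2. Camera trajectory: how the camera moves through the scene.
3. Object entities: each salient object with its approximate position
   and size in an allocentric (scene-centered) frame.
Then use this representation to answer the question.

Question: {question}
"""

def ask_spatial_question(mllm_client, video_frames, question: str) -> str:
    """Query an MLLM with a TRACE-style prompt (hypothetical client API)."""
    prompt = TRACE_PROMPT.format(question=question)
    # The generate() signature is assumed; substitute your MLLM SDK's call.
    return mllm_client.generate(images=video_frames, prompt=prompt)
```

The key design point, per the abstract, is that the model is induced to emit the structured scene description as an explicit intermediate reasoning trace rather than answering directly from the raw frames.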

Original Abstract

Existing Multimodal Large Language Models (MLLMs) struggle with 3D spatial reasoning, as they fail to construct structured abstractions of the 3D environment depicted in video inputs. To bridge this gap, drawing inspiration from cognitive theories of allocentric spatial reasoning, we investigate how to enable MLLMs to model and reason over text-based spatial representations of video. Specifically, we introduce Textual Representation of Allocentric Context from Egocentric Video (TRACE), a prompting method that induces MLLMs to generate text-based representations of 3D environments as intermediate reasoning traces for more accurate spatial question answering. TRACE encodes meta-context, camera trajectories, and detailed object entities to support structured spatial reasoning over egocentric videos. Extensive experiments on VSI-Bench and OST-Bench demonstrate that TRACE yields notable and consistent improvements over prior prompting strategies across a diverse range of MLLM backbones, spanning different parameter scales and training schemas. We further present ablation studies to validate our design choices, along with detailed analyses that probe the bottlenecks of 3D spatial reasoning in MLLMs.

Tags

MLLM · Spatial Reasoning · Textual Representation · Video Understanding

arXiv Categories

cs.CV cs.CL