Cog3DMap: Multi-View Vision-Language Reasoning with 3D Cognitive Maps
AI Summary
Cog3DMap enhances the multi-view spatial reasoning ability of MLLMs by constructing an explicit 3D cognitive map.
Main Contributions
- Proposes the Cog3DMap framework, which constructs an explicit 3D cognitive map
- Integrates 3D spatial information into the MLLM's input
- Achieves state-of-the-art results on spatial reasoning benchmarks
Methodology
The method recurrently builds a 3D memory from multi-view images, in which every token carries both semantic and geometric information; the MLLM then reasons directly over this memory.
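The recurrent construction described above can be sketched as follows. This is a minimal structural illustration, not the paper's implementation: the function names, the concatenation of semantic features with 3D coordinates, and the EMA-style recurrent update are all assumptions made for clarity.

```python
import numpy as np

def build_cognitive_map(view_features, view_coords, alpha=0.5):
    """Illustrative sketch of a recurrent 3D memory.

    Each token concatenates semantic features with explicit 3D
    coordinates, so the resulting tokens are grounded in 3D space.
    The exponential-moving-average update is a hypothetical stand-in
    for the paper's recurrent fusion mechanism.

    view_features: list of (N, D) arrays, per-view semantic features
    view_coords:   list of (N, 3) arrays, per-view 3D token positions
    """
    memory = None
    for feats, coords in zip(view_features, view_coords):
        # Each token gets both semantic (D) and geometric (3) channels.
        tokens = np.concatenate([feats, coords], axis=-1)  # (N, D + 3)
        # Recurrently fold the new view into the shared 3D memory.
        memory = tokens if memory is None else alpha * memory + (1 - alpha) * tokens
    return memory  # spatially grounded tokens to feed into the MLLM
```

In this sketch, the returned memory plays the role of the 3D cognitive map: a fixed set of tokens that accumulates evidence across views while keeping geometry explicit, so the downstream model need not infer 3D structure implicitly.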
Original Abstract
Precise spatial understanding from multi-view images remains a fundamental challenge for Multimodal Large Language Models (MLLMs), as their visual representations are predominantly semantic and lack explicit geometric grounding. While existing approaches augment visual tokens with geometric cues from visual geometry models, the MLLM is still required to implicitly infer the underlying 3D structure of the scene from these augmented tokens, limiting its spatial reasoning capability. To address this issue, we introduce Cog3DMap, a framework that recurrently constructs an explicit 3D memory from multi-view images, where each token is grounded in 3D space and possesses both semantic and geometric information. By feeding these tokens into the MLLM, our framework enables direct reasoning over a spatially structured 3D map, achieving state-of-the-art performance on various spatial reasoning benchmarks. Code will be made publicly available.