LLM Reasoning Relevance: 8/10

World2Mind: Cognition Toolkit for Allocentric Spatial Reasoning in Foundation Models

Shouwei Ruan, Bin Wang, Zhenyu Wu, Qihui Zhu, Yuxiang Zhang, Hang Su, Yubin Wang
arXiv: 2603.09774v1 Published: 2026-03-10 Updated: 2026-03-10

AI Summary

The World2Mind toolkit improves the 3D spatial reasoning ability of multimodal foundation models by constructing spatial cognitive maps.

Key Contributions

  • Proposes World2Mind, a training-free toolkit that improves spatial reasoning without any model fine-tuning
  • Constructs an Allocentric-Spatial Tree (AST) that provides geometric-topological priors
  • Introduces a three-stage reasoning chain to cope with the inherent inaccuracies of 3D reconstruction

Methodology

World2Mind uses 3D reconstruction and instance segmentation to build a structured spatial cognitive map, supplies geometric-topological priors through the AST, and applies a three-stage reasoning chain (tool invocation assessment, modality-decoupled cue collection, and geometry-semantics interwoven reasoning).
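To make the AST idea concrete, here is a minimal, hypothetical sketch of how landmarks could be modeled with elliptical parameters in a top-down layout and queried allocentrically. The paper does not publish its data structure; the `Landmark` fields, the point-in-ellipse test, and the `locate` traversal below are illustrative assumptions only, not the authors' implementation.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Landmark:
    """Hypothetical AST node: an ellipse in the top-down (bird's-eye) plane."""
    name: str
    cx: float    # ellipse center x
    cy: float    # ellipse center y
    a: float     # semi-major axis
    b: float     # semi-minor axis
    theta: float # orientation of the major axis, radians
    children: list = field(default_factory=list)

    def contains(self, x: float, y: float) -> bool:
        """Point-in-ellipse test after rotating into the ellipse's local frame."""
        dx, dy = x - self.cx, y - self.cy
        u = dx * math.cos(-self.theta) - dy * math.sin(-self.theta)
        v = dx * math.sin(-self.theta) + dy * math.cos(-self.theta)
        return (u / self.a) ** 2 + (v / self.b) ** 2 <= 1.0

def locate(root: Landmark, x: float, y: float) -> list:
    """Return the path of nested landmarks whose ellipses contain (x, y)."""
    path, node = [], root
    while node is not None:
        path.append(node.name)
        node = next((c for c in node.children if c.contains(x, y)), None)
    return path

# Toy scene: a room containing a table.
table = Landmark("table", cx=2.0, cy=1.0, a=0.8, b=0.5, theta=0.0)
room = Landmark("room", cx=0.0, cy=0.0, a=5.0, b=4.0, theta=0.0,
                children=[table])
print(locate(room, 2.0, 1.0))  # ['room', 'table']
```

Serializing such a tree to text is one plausible way the AST could let text-only models answer 3D layout questions, as the abstract reports.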

Original Abstract

Achieving robust spatial reasoning remains a fundamental challenge for current Multimodal Foundation Models (MFMs). Existing methods either overfit statistical shortcuts via 3D grounding data or remain confined to 2D visual perception, limiting both spatial reasoning accuracy and generalization in unseen scenarios. Inspired by the spatial cognitive mapping mechanisms of biological intelligence, we propose World2Mind, a training-free spatial intelligence toolkit. At its core, World2Mind leverages 3D reconstruction and instance segmentation models to construct structured spatial cognitive maps, empowering MFMs to proactively acquire targeted spatial knowledge about landmarks and routes of interest. To provide robust geometric-topological priors, World2Mind synthesizes an Allocentric-Spatial Tree (AST) that uses elliptical parameters to accurately model the top-down layout of landmarks. To mitigate the inherent inaccuracies of 3D reconstruction, we introduce a three-stage reasoning chain comprising tool invocation assessment, modality-decoupled cue collection, and geometry-semantics interwoven reasoning. Extensive experiments demonstrate that World2Mind boosts the performance of frontier models, such as GPT-5.2, by 5%~18%. Remarkably, relying solely on the AST-structured text, purely text-only foundation models can perform complex 3D spatial reasoning, achieving performance approaching that of advanced multimodal models.

Tags

Spatial Reasoning · Multimodal Learning · Toolkit · Cognitive Maps

arXiv Categories

cs.AI