HiSpatial: Taming Hierarchical 3D Spatial Understanding in Vision-Language Models
AI Summary
HiSpatial proposes a hierarchical framework to improve 3D spatial understanding in VLMs, constructs a large-scale dataset and an RGB-D VLM, and achieves state-of-the-art results on multiple benchmarks.
Main Contributions
- Proposes a hierarchical framework that decomposes 3D spatial understanding into progressively complex tasks
- Constructs a large-scale 3D spatial VQA dataset
- Develops an RGB-D VLM that incorporates metric-scale point maps
Methodology
Builds a hierarchical task taxonomy, automatically generates a dataset for supervised fine-tuning, and uses an RGB-D VLM to inject 3D information.
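The "metric-scale point maps" mentioned above are, in the standard formulation, per-pixel 3D coordinates obtained by back-projecting a metric depth map through the camera intrinsics. The paper's exact construction is not given in this summary, so the following is only a minimal sketch under a pinhole-camera assumption; the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def depth_to_point_map(depth, fx, fy, cx, cy):
    """Back-project a metric depth map (H, W), in meters, into a
    per-pixel 3D point map (H, W, 3) in the camera frame.

    Assumes a simple pinhole model with focal lengths (fx, fy) and
    principal point (cx, cy); the paper's actual pipeline may differ.
    """
    h, w = depth.shape
    # Pixel grids: u indexes columns (x), v indexes rows (y).
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth  # X: right
    y = (v - cy) / fy * depth  # Y: down
    return np.stack([x, y, depth], axis=-1)

# Toy example: a 2x2 depth map with every pixel 2 m from the camera.
pts = depth_to_point_map(np.full((2, 2), 2.0), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Such a point map has the same spatial layout as the RGB image, which is what makes it convenient to feed into a VLM as an auxiliary, image-aligned input channel.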
Original Abstract
Achieving human-like spatial intelligence for vision-language models (VLMs) requires inferring 3D structures from 2D observations, recognizing object properties and relations in 3D space, and performing high-level spatial reasoning. In this paper, we propose a principled hierarchical framework that decomposes the learning of 3D spatial understanding in VLMs into four progressively complex levels, from geometric perception to abstract spatial reasoning. Guided by this framework, we construct an automated pipeline that processes approximately 5M images with over 45M objects to generate 3D spatial VQA pairs across diverse tasks and scenes for VLM supervised fine-tuning. We also develop an RGB-D VLM incorporating metric-scale point maps as auxiliary inputs to further enhance spatial understanding. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on multiple spatial understanding and reasoning benchmarks, surpassing specialized spatial models and large proprietary systems such as Gemini-2.5-pro and GPT-5. Moreover, our analysis reveals clear dependencies among hierarchical task levels, offering new insights into how multi-level task design facilitates the emergence of 3D spatial intelligence.