VIEW2SPACE: Studying Multi-View Visual Reasoning from Sparse Observations
AI Summary
Studies sparse multi-view visual reasoning, introducing the VIEW2SPACE benchmark and a Grounded Chain-of-Thought method.
Main Contributions
- Introduces VIEW2SPACE, a benchmark dataset for multi-view reasoning
- Designs the Grounded Chain-of-Thought with Visual Evidence method
- Analyzes model performance and bottlenecks in multi-view reasoning
Methodology
Uses a physics-based engine to generate high-quality 3D scene data, builds large-scale question-answer pairs on top of it, and proposes a chain-of-thought method grounded in visual evidence, as sketched below.
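A minimal sketch of how such a pipeline might assemble a grounded question-answer pair from per-view simulator metadata is shown below. The class names, fields, question template, and helper function are hypothetical illustrations for clarity, not the paper's actual schema or code.

```python
# Hypothetical sketch: building a grounded multi-view QA pair from
# simulated per-view metadata. Names and fields are illustrative
# assumptions, not the VIEW2SPACE implementation.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ViewObservation:
    """One sparse viewpoint rendered by the simulator."""
    image_path: str                   # rendered RGB frame
    camera_pose: Tuple[float, ...]    # flattened 4x4 extrinsics
    visible_objects: List[str]        # object IDs visible in this view


@dataclass
class GroundedQA:
    """A question-answer pair tied to explicit visual evidence."""
    views: List[ViewObservation]
    question: str
    answer: str
    evidence: List[Tuple[int, str]]   # (view index, object ID) pairs


def build_relation_qa(views: List[ViewObservation],
                      obj_a: str, obj_b: str,
                      relation: str, answer: str) -> GroundedQA:
    """Assemble a QA pair and record which views actually show each
    object, so a grounded chain of thought can cite them as evidence.
    The answer is assumed to come from simulator ground truth."""
    evidence = [(i, obj) for i, view in enumerate(views)
                for obj in (obj_a, obj_b) if obj in view.visible_objects]
    question = (f"Across the given views, is the {obj_a} "
                f"{relation} the {obj_b}?")
    return GroundedQA(views=views, question=question,
                      answer=answer, evidence=evidence)
```

In this sketch the simulator's ground-truth geometry supplies the answer, while the recorded (view, object) evidence pairs are what a grounded chain-of-thought response would reference when explaining its reasoning.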
Original Abstract
Multi-view visual reasoning is essential for intelligent systems that must understand complex environments from sparse and discrete viewpoints, yet existing research has largely focused on single-image or temporally dense video settings. In real-world scenarios, reasoning across views requires integrating partial observations without explicit guidance, while collecting large-scale multi-view data with accurate geometric and semantic annotations remains challenging. To address this gap, we leverage physically grounded simulation to construct diverse, high-fidelity 3D scenes with precise per-view metadata, enabling scalable data generation that remains transferable to real-world settings. Based on this engine, we introduce VIEW2SPACE, a multi-dimensional benchmark for sparse multi-view reasoning, together with a scalable, disjoint training split supporting millions of grounded question-answer pairs. Using this benchmark, a comprehensive evaluation of state-of-the-art vision-language and spatial models reveals that multi-view reasoning remains largely unsolved, with most models performing only marginally above random guessing. We further investigate whether training can bridge this gap. Our proposed Grounded Chain-of-Thought with Visual Evidence substantially improves performance under moderate difficulty, and generalizes to real-world data, outperforming existing approaches in cross-dataset evaluation. We further conduct difficulty-aware scaling analyses across model size, data scale, reasoning depth, and visibility constraints, indicating that while geometric perception can benefit from scaling under sufficient visibility, deep compositional reasoning across sparse views remains a fundamental challenge.