Multimodal Learning — Relevance: 9/10

MultihopSpatial: Multi-hop Compositional Spatial Reasoning Benchmark for Vision-Language Model

Youngwan Lee, Soojin Jang, Yoorhim Cho, Seunghwan Lee, Yong-Ju Lee, Sung Ju Hwang
arXiv: 2603.18892v1 Published: 2026-03-19 Updated: 2026-03-19

AI Summary

Proposes MultihopSpatial, a benchmark for evaluating the multi-hop spatial reasoning capabilities of vision-language models.

Key Contributions

  • MultihopSpatial, a benchmark for multi-hop compositional spatial reasoning
  • Acc@50IoU, a metric that jointly evaluates reasoning and visual grounding
  • MultihopSpatial-Train, a training corpus for improving spatial intelligence

Methodology

Constructs a dataset of multi-hop spatial reasoning queries, designs a new evaluation metric, and improves model performance through reinforcement-learning post-training.

Original Abstract

Spatial reasoning is foundational for Vision-Language Models (VLMs), particularly when deployed as Vision-Language-Action (VLA) agents in physical environments. However, existing benchmarks predominantly focus on elementary, single-hop relations, neglecting the multi-hop compositional reasoning and precise visual grounding essential for real-world scenarios. To address this, we introduce MultihopSpatial, offering three key contributions: (1) A comprehensive benchmark designed for multi-hop and compositional spatial reasoning, featuring 1- to 3-hop complex queries across diverse spatial perspectives. (2) Acc@50IoU, a complementary metric that simultaneously evaluates reasoning and visual grounding by requiring both answer selection and precise bounding box prediction - capabilities vital for robust VLA deployment. (3) MultihopSpatial-Train, a dedicated large-scale training corpus to foster spatial intelligence. Extensive evaluation of 37 state-of-the-art VLMs yields eight key insights, revealing that compositional spatial reasoning remains a formidable challenge. Finally, we demonstrate that reinforcement learning post-training on our corpus enhances both intrinsic VLM spatial reasoning and downstream embodied manipulation performance.
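The Acc@50IoU metric described above counts a sample as correct only when the model both selects the right answer and localizes the referenced object with sufficient box overlap. A minimal sketch of how such a metric could be computed (the function names, data layout, and box format are illustrative assumptions, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def acc_at_50_iou(predictions, ground_truth, iou_threshold=0.5):
    """A sample scores only if the answer matches AND the predicted box
    overlaps the ground-truth box with IoU >= the threshold (0.5 here)."""
    correct = 0
    for pred, gt in zip(predictions, ground_truth):
        if pred["answer"] == gt["answer"] and iou(pred["box"], gt["box"]) >= iou_threshold:
            correct += 1
    return correct / len(ground_truth)
```

This illustrates why the metric is stricter than answer accuracy alone: a model that guesses the right answer but grounds it in the wrong region receives no credit.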

Tags

Vision-Language Models · Spatial Reasoning · Multi-hop Reasoning · Visual Grounding · Benchmark

arXiv Categories

cs.CV cs.AI