Multimodal Learning Relevance: 9/10

VRIQ: Benchmarking and Analyzing Visual-Reasoning IQ of VLMs

Tina Khezresmaeilzadeh, Jike Zhong, Konstantinos Psounis
arXiv: 2602.05382v1 Published: 2026-02-05 Updated: 2026-02-05

AI Summary

VRIQ benchmarks the visual reasoning ability of VLMs and finds that perception, rather than reasoning, is the main bottleneck.

Key Contributions

  • Introduces the VRIQ benchmark for assessing the visual reasoning ability of VLMs
  • Analyzes VLM weaknesses in visual reasoning and finds that perception is the main bottleneck
  • Designs fine-grained diagnostic probes that reveal which specific perception categories cause failures

Methodology

Constructs abstract puzzle-style and natural-image reasoning tasks, evaluates VLM performance on them, and designs diagnostic probes to analyze the causes of failure (a minimal evaluation sketch follows).
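The paper does not ship code, but the evaluation protocol described above can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in, not the authors' actual harness: the dataset filename and schema, the query_vlm() function, and the multiple-choice answer format are all assumptions.

```python
# Minimal sketch of a VRIQ-style evaluation loop.
# Hypothetical: the dataset schema and query_vlm() are stand-ins,
# not the authors' released code or any real model API.
import json
import random

def query_vlm(image_path: str, question: str, options: list[str]) -> str:
    """Placeholder for a real VLM call (an API or a local model).
    Here it guesses uniformly at random: the ~25% chance baseline that
    the reported ~28% abstract-puzzle accuracy should be compared to."""
    return random.choice(options)

def evaluate(items: list[dict]) -> float:
    """Each item: {"image": path, "question": str, "options": [...], "answer": str}."""
    correct = 0
    for item in items:
        prediction = query_vlm(item["image"], item["question"], item["options"])
        correct += prediction == item["answer"]
    return correct / len(items)

if __name__ == "__main__":
    with open("vriq_abstract_puzzles.json") as f:  # hypothetical filename
        items = json.load(f)
    print(f"Accuracy: {evaluate(items):.1%}")
```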

Original Abstract

Recent progress in Vision Language Models (VLMs) has raised the question of whether they can reliably perform nonverbal reasoning. To this end, we introduce VRIQ (Visual Reasoning IQ), a novel benchmark designed to assess and analyze the visual reasoning ability of VLMs. We evaluate models on two sets of tasks: abstract puzzle-style and natural-image reasoning tasks. We find that on abstract puzzles, performance remains near random with an average accuracy of around 28%, while natural tasks yield better but still weak results with 45% accuracy. We also find that tool-augmented reasoning yields only modest improvements. To uncover the source of this weakness, we introduce diagnostic probes targeting perception and reasoning. Our analysis demonstrates that around 56% of failures arise from perception alone, 43% from both perception and reasoning, and a mere 1% from reasoning alone. This motivates us to design fine-grained diagnostic probe questions targeting specific perception categories (e.g., shape, count, position, 3D/depth), revealing that certain categories cause more failures than others. Our benchmark and analysis establish that current VLMs, even with visual reasoning tools, remain unreliable abstract reasoners, mostly due to perception limitations, and offer a principled basis for improving visual reasoning in multimodal systems.
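The failure decomposition in the abstract (56% perception alone, 43% both, 1% reasoning alone) implies a simple attribution rule over paired probe outcomes. Below is a hedged sketch of that logic; the field names and the exact rule are one plausible reading of the abstract, not the authors' released code.

```python
# Sketch of probe-based failure attribution, inferred from the abstract.
# Assumption: each failed benchmark item also has outcomes for a
# perception probe and a reasoning probe.
from collections import Counter

def attribute_failure(perception_ok: bool, reasoning_ok: bool) -> str:
    """Classify a *failed* item by which diagnostic probe(s) also failed."""
    if not perception_ok and reasoning_ok:
        return "perception_only"   # ~56% of failures in the paper
    if not perception_ok and not reasoning_ok:
        return "both"              # ~43%
    if perception_ok and not reasoning_ok:
        return "reasoning_only"    # ~1%
    return "unexplained"           # both probes passed, yet the item failed

# Usage on hypothetical probe results for three failed items:
failed_items = [
    {"perception_ok": False, "reasoning_ok": True},
    {"perception_ok": False, "reasoning_ok": False},
    {"perception_ok": True,  "reasoning_ok": False},
]
counts = Counter(attribute_failure(x["perception_ok"], x["reasoning_ok"])
                 for x in failed_items)
total = sum(counts.values())
for label, n in counts.items():
    print(f"{label}: {n / total:.0%}")
```

The same tallying extends to the paper's fine-grained analysis: replace the boolean perception probe with per-category probes (shape, count, position, 3D/depth) and count failures per category.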

Tags

Visual Reasoning · VLM · Benchmarking · Perception · Diagnostic Probes

arXiv Categories

cs.CV cs.LG