Multimodal Learning Relevance: 9/10

Saliency-Aware Multi-Route Thinking: Revisiting Vision-Language Reasoning

Mingjia Shi, Yinhan He, Yaochen Zhu, Jundong Li
arXiv: 2602.16702v1 Published: 2026-02-18 Updated: 2026-02-18

AI Summary

Proposes SAP, a saliency-aware multi-route reasoning approach that addresses the under-utilization of visual information in vision-language model reasoning.

Key Contributions

  • Proposes Saliency-Aware Principle (SAP) selection for vision-language reasoning
  • Supports multi-route inference, exploring diverse reasoning behaviors in parallel
  • Model-agnostic and data-free, requiring no additional training

Methodology

SAP operates on high-level reasoning principles, using saliency information to guide the re-consultation of visual evidence during generation, and supports parallel exploration of multiple reasoning routes.
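The paper does not give implementation details here, but the core idea of selecting among parallel reasoning routes by a saliency-based grounding score can be sketched as follows. This is a minimal toy illustration, not the authors' method: `saliency_score`, `sap_select`, and the stubbed generator are all hypothetical names, and real saliency would come from the model's attention over image regions rather than a hand-written dictionary.

```python
def saliency_score(route_tokens, saliency_map):
    """Toy grounding proxy: sum the saliency mass of the visual
    concepts that a reasoning route actually mentions."""
    return sum(saliency_map.get(tok, 0.0) for tok in route_tokens)

def sap_select(principles, saliency_map, generate):
    """Run one reasoning route per high-level principle (in parallel
    conceptually; sequentially here for simplicity) and keep the route
    that is best grounded in salient visual evidence."""
    routes = {p: generate(p) for p in principles}
    best = max(routes, key=lambda p: saliency_score(routes[p], saliency_map))
    return best, routes[best]

# Toy demo: saliency mass is concentrated on "cat" and "mat".
saliency = {"cat": 0.9, "mat": 0.7, "sky": 0.1}

def toy_generate(principle):
    # Stand-in for VLM decoding conditioned on a reasoning principle.
    return {"describe-objects": ["cat", "mat"],
            "describe-scene": ["sky"]}[principle]

best, route = sap_select(["describe-objects", "describe-scene"],
                         saliency, toy_generate)
print(best)  # → describe-objects
```

Selecting at the level of whole principles, rather than individual tokens, is what the abstract credits with keeping control stable under noisy saliency feedback.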

Original Abstract

Vision-language models (VLMs) aim to reason by jointly leveraging visual and textual modalities. While allocating additional inference-time computation has proven effective for large language models (LLMs), achieving similar scaling in VLMs remains challenging. A key obstacle is that visual inputs are typically provided only once at the start of generation, while textual reasoning (e.g., early visual summaries) is generated autoregressively, causing reasoning to become increasingly text-dominated and allowing early visual grounding errors to accumulate. Moreover, vanilla guidance for visual grounding during inference is often coarse and noisy, making it difficult to steer reasoning over long texts. To address these challenges, we propose *Saliency-Aware Principle* (SAP) selection. SAP operates on high-level reasoning principles rather than token-level trajectories, which enables stable control over discrete generation under noisy feedback while allowing later reasoning steps to re-consult visual evidence when renewed grounding is required. In addition, SAP supports multi-route inference, enabling parallel exploration of diverse reasoning behaviors. SAP is model-agnostic and data-free, requiring no additional training. Empirical results show that SAP achieves competitive performance, especially in reducing object hallucination, under comparable token-generation budgets, while yielding more stable reasoning and lower response latency than CoT-style long sequential reasoning.

Tags

Vision-Language Models, Reasoning, Saliency-Awareness, Multi-Route Reasoning

arXiv Categories

cs.CV