Tinted Frames: Question Framing Blinds Vision-Language Models
AI Summary
The study shows that the visual attention of Vision-Language Models (VLMs) is shaped by question framing, leading to degraded performance and cross-framing inconsistency.
Key Contributions
- Reveals that VLMs' visual attention is selectively modulated by linguistic framing
- Quantifies how framing alters both the amount and the distribution of attention over the image
- Proposes a lightweight prompt-tuning method that improves visual grounding and raises performance across framings
Methodology
Uses visual attention as a probe to analyze VLM attention patterns under different framings, then improves visual grounding via prompt tuning.
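The attention-probe idea can be illustrated with a minimal sketch (not the paper's code): measure how much attention mass text tokens place on image tokens under two framings of the same question. The checkpoint name, prompt format, and the image-token lookup via `image_token_index` are assumptions for a LLaVA-style model in HuggingFace transformers, where the processor expands the `<image>` placeholder directly in `input_ids`.

```python
# Sketch: compare image-attention share across framings for a LLaVA-style VLM.
# Assumptions: llava-hf checkpoint, processor expands <image> in input_ids,
# image positions identified via model.config.image_token_index.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

MODEL_ID = "llava-hf/llava-1.5-7b-hf"  # example checkpoint, not necessarily the paper's
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = LlavaForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto", attn_implementation="eager"
)
model.eval()

def image_attention_share(image: Image.Image, prompt: str) -> float:
    """Fraction of last-layer attention (averaged over heads and text query
    positions) that text tokens direct at image tokens."""
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16)
    with torch.no_grad():
        out = model(**inputs, output_attentions=True)
    attn = out.attentions[-1][0].float()                      # (heads, seq, seq), last layer
    is_image = inputs["input_ids"][0] == model.config.image_token_index
    text_queries = ~is_image
    # attention from text query positions onto image key positions
    mass_on_image = attn[:, text_queries][:, :, is_image].sum(-1)  # (heads, n_text)
    return mass_on_image.mean().item()

image = Image.open("example.jpg")
open_ended = "USER: <image>\nWhat color is the car? ASSISTANT:"
yes_no = "USER: <image>\nIs the car red? Answer yes or no. ASSISTANT:"
print("open-ended image attention:", image_attention_share(image, open_ended))
print("yes/no image attention:   ", image_attention_share(image, yes_no))
```

The single scalar per framing is only a coarse probe; the paper also examines where on the image the attention lands, which would require aggregating per-region rather than per-token sums.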
Original Abstract
Vision-Language Models (VLMs) have been shown to be blind, often underutilizing their visual inputs even on tasks that require visual reasoning. In this work, we demonstrate that VLMs are selectively blind. They modulate the amount of attention applied to visual inputs based on linguistic framing, even when alternative framings demand identical visual reasoning. Using visual attention as a probe, we quantify how framing alters both the amount and distribution of attention over the image. Constrained framings, such as multiple choice and yes/no, induce substantially lower attention to image context compared to open-ended framings, reduce focus on task-relevant regions, and shift attention towards uninformative tokens. We further demonstrate that this attention misallocation is the principal cause of degraded accuracy and cross-framing inconsistency. Building on this mechanistic insight, we introduce a lightweight prompt-tuning method using learnable tokens that encourages the robust, visually grounded attention patterns observed in open-ended settings, improving visual grounding and performance across framings.
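The prompt-tuning idea in the abstract rests on learnable (soft) prompt tokens trained while the model stays frozen. Below is a generic sketch of that mechanism, assuming a backbone that accepts `inputs_embeds` (as HuggingFace causal LMs do); the class name `SoftPrompt`, the token count, and the loss shown are illustrative, not the paper's released implementation.

```python
# Generic soft-prompt sketch: a few learnable embeddings are prepended to the
# token embeddings of the framed question; only these embeddings are trained.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, num_tokens: int, hidden_dim: int):
        super().__init__()
        # learnable prompt tokens, initialized at roughly embedding scale
        self.prompt = nn.Parameter(torch.randn(num_tokens, hidden_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq, hidden) embeddings of the framed question
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Usage sketch with a frozen VLM backbone (hypothetical `model`, `optimizer`):
#   for p in model.parameters():
#       p.requires_grad_(False)
#   soft_prompt = SoftPrompt(num_tokens=16, hidden_dim=model.config.hidden_size)
#   embeds = model.get_input_embeddings()(input_ids)
#   out = model(inputs_embeds=soft_prompt(embeds), pixel_values=pixel_values, labels=labels)
#   loss = out.loss  # optionally plus a term aligning attention with open-ended patterns
#   loss.backward(); optimizer.step()
```

Training only the prompt parameters keeps the method lightweight, and the objective could be augmented, as the abstract suggests, with a term that pushes attention under constrained framings toward the patterns observed in open-ended prompts.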