Beyond Static Cropping: Layer-Adaptive Visual Localization and Decoding Enhancement
AI Summary
The paper proposes a layer-adaptive method for visual localization and decoding enhancement that improves performance on visual question answering (VQA) tasks.
Main Contributions
- Proposed Visual Activation by Query (VAQ), a query-based metric of visual activation
- Proposed LASER, a layer-adaptive inference procedure
- Demonstrated experimentally that LASER is effective across a variety of VQA tasks
Methodology
A layer-wise sensitivity analysis reveals that visual grounding is a dynamic process. VAQ is used to select the layers most relevant to the task, and LASER adaptively enhances visual localization and decoding at those layers.
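The summary above can be illustrated with a minimal sketch of the layer-selection idea. The paper's exact VAQ formula is not given here, so the code below is a hypothetical stand-in: it assumes VAQ can be approximated as the per-layer KL divergence between the attention distribution over visual tokens computed with the query and a query-ablated baseline, and it selects the layer where that sensitivity peaks. The function names `vaq_scores` and `select_layer`, and the specific divergence, are assumptions for illustration only.

```python
import numpy as np

def vaq_scores(attn_with_query, attn_without_query, eps=1e-8):
    """Hypothetical VAQ sketch: per-layer sensitivity of visual attention
    to the input query, measured as KL(p || q) between attention
    distributions over visual tokens with and without the query.

    attn_*: arrays of shape (num_layers, num_visual_tokens); each row is
    an attention distribution (non-negative, summing to 1).
    """
    # Smooth and renormalize to avoid log(0) / division by zero.
    p = attn_with_query + eps
    q = attn_without_query + eps
    p = p / p.sum(axis=1, keepdims=True)
    q = q / q.sum(axis=1, keepdims=True)
    return np.sum(p * np.log(p / q), axis=1)  # one score per layer

def select_layer(attn_with_query, attn_without_query):
    """Pick the layer whose attention map reacts most to the query."""
    return int(np.argmax(vaq_scores(attn_with_query, attn_without_query)))
```

In this toy setup, a layer whose attention shifts sharply when the query is present scores highest, matching the intuition that complex reasoning queries reactivate visual information at specific (often deeper) layers.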
Original Abstract
Large Vision-Language Models (LVLMs) have advanced rapidly by aligning visual patches with the text embedding space, but a fixed visual-token budget forces images to be resized to a uniform pretraining resolution, often erasing fine-grained details and causing hallucinations via over-reliance on language priors. Recent attention-guided enhancement (e.g., cropping or region-focused attention allocation) alleviates this, yet it commonly hinges on a static "magic layer" empirically chosen on simple recognition benchmarks and thus may not transfer to complex reasoning tasks. In contrast to this static assumption, we propose a dynamic perspective on visual grounding. Through a layer-wise sensitivity analysis, we demonstrate that visual grounding is a dynamic process: while simple object recognition tasks rely on middle layers, complex visual search and reasoning tasks require visual information to be reactivated at deeper layers. Based on this observation, we introduce Visual Activation by Query (VAQ), a metric that identifies the layer whose attention map is most relevant to query-specific visual grounding by measuring attention sensitivity to the input query. Building on VAQ, we further propose LASER (Layer-adaptive Attention-guided Selective visual and decoding Enhancement for Reasoning), a training-free inference procedure that adaptively selects task-appropriate layers for visual localization and question answering. Experiments across diverse VQA benchmarks show that LASER significantly improves VQA accuracy across tasks with varying levels of complexity.