VISion On Request: Enhanced VLLM efficiency with sparse, dynamically selected, vision-language interactions
AI Summary
VISOR improves LVLM efficiency through dynamic, sparse vision-language interactions, reducing computational cost without sacrificing performance.
Main Contributions
- Proposes VISOR, a sparse vision-language interaction method
- Designs a strategy for dynamically allocating visual computation
- Matches or exceeds state-of-the-art results on multiple benchmarks
Methodology
Visual representations are refined through a small number of self-attention layers, while a lightweight policy dynamically allocates visual computation per sample, enabling efficient inference. A minimal sketch of this interaction pattern follows.
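To make the interaction pattern concrete, below is a minimal PyTorch-style sketch: a handful of cross-attention layers let the text attend to the full set of high-resolution visual tokens cheaply, while joint self-attention layers that also refine the visual tokens are enabled only up to a per-sample budget. All class names, layer counts, and the gating interface are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch of sparse vision-language interaction (assumptions, not the
# authors' code): cross-attention provides general visual context, and a
# budgeted number of joint self-attention layers refine the visual tokens.
import torch
import torch.nn as nn


class CrossAttnLayer(nn.Module):
    """Text queries attend to the full set of high-resolution visual tokens."""

    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, vision: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(self.norm(text), vision, vision)
        return text + out


class JointSelfAttnLayer(nn.Module):
    """Self-attention over [vision; text] that also updates the visual tokens."""

    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, vision: torch.Tensor):
        joint = torch.cat([vision, text], dim=1)
        normed = self.norm(joint)
        out, _ = self.attn(normed, normed, normed)
        joint = joint + out
        n_v = vision.shape[1]
        return joint[:, n_v:], joint[:, :n_v]  # updated text, updated vision


class SparseVisionTextStack(nn.Module):
    """A few cross-attention layers give cheap visual context; joint
    self-attention layers run only up to a per-sample budget."""

    def __init__(self, dim: int, n_cross: int = 4, n_joint: int = 4):
        super().__init__()
        self.cross = nn.ModuleList([CrossAttnLayer(dim) for _ in range(n_cross)])
        self.joint = nn.ModuleList([JointSelfAttnLayer(dim) for _ in range(n_joint)])

    def forward(self, text: torch.Tensor, vision: torch.Tensor, budget: int):
        for layer in self.cross:
            text = layer(text, vision)
        for i, layer in enumerate(self.joint):
            if i >= budget:  # skip the remaining refinement layers
                break
            text, vision = layer(text, vision)
        return text
```

A caller would choose `budget` per sample, for example with a lightweight policy head like the one sketched after the abstract below.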
Original Abstract
Existing approaches for improving the efficiency of Large Vision-Language Models (LVLMs) are largely based on the concept of visual token reduction. This approach, however, creates an information bottleneck that impairs performance, especially on challenging tasks that require fine-grained understanding and reasoning. In this work, we challenge this paradigm by introducing VISion On Request (VISOR), a method that reduces inference cost without discarding visual information. Instead of compressing the image, VISOR improves efficiency by sparsifying the interaction between image and text tokens. Specifically, the language model attends to the full set of high-resolution visual tokens through a small, strategically placed set of attention layers: general visual context is provided by efficient cross-attention between text and image tokens, while a few well-placed and dynamically selected self-attention layers refine the visual representations themselves, enabling complex, high-resolution reasoning when needed. Based on this principle, we first train a single universal network on a range of computational budgets by varying the number of self-attention layers, and then introduce a lightweight policy mechanism that dynamically allocates visual computation based on per-sample complexity. Extensive experiments show that VISOR drastically reduces computational cost while matching or exceeding state-of-the-art results across a diverse suite of benchmarks, and excels in challenging tasks that require detailed visual understanding.
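The per-sample allocation described in the abstract can be pictured as a tiny classifier over budgets. The sketch below is an assumption rather than the paper's exact policy head: it pools image and text features and predicts how many joint self-attention layers to enable for that sample.

```python
# Illustrative sketch of a lightweight per-sample budget policy, assuming a
# universal network trained to run with any budget in {0, ..., max_budget}.
# The feature choice and classifier shape are assumptions, not the paper's.
import torch
import torch.nn as nn


class BudgetPolicy(nn.Module):
    """Maps pooled image/text features to a choice of visual-compute budget."""

    def __init__(self, dim: int, max_budget: int):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, max_budget + 1)
        )

    def forward(self, vision: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # Mean-pool over tokens, concatenate, and classify into a budget.
        feats = torch.cat([vision.mean(dim=1), text.mean(dim=1)], dim=-1)
        logits = self.scorer(feats)
        return logits.argmax(dim=-1)  # number of joint self-attention layers to run
```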