Multimodal Learning · Relevance: 9/10

Focus-Scan-Refine: From Human Visual Perception to Efficient Visual Token Pruning

Enwei Tong, Yuanchao Bai, Yao Zhu, Junjun Jiang, Xianming Liu
arXiv: 2602.05809v1 · Published: 2026-02-05 · Updated: 2026-02-05

AI Summary

Proposes the FSR framework, which mimics human visual perception to effectively prune visual tokens in VLMs, improving both efficiency and accuracy.

Key Contributions

  • Proposes the Focus-Scan-Refine (FSR) framework
  • Combines visual importance with instruction relevance to focus on key evidence (see the sketch after this list)
  • Refines the scanned context via similarity-based assignment and score-weighted merging
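A minimal sketch of how such a focus score might be computed, assuming the visual-importance signal is per-token attention mass and instruction relevance is cosine similarity to a mean-pooled instruction embedding. The function name `focus_scores` and the mixing weight `alpha` are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def focus_scores(vis_tokens, txt_tokens, vis_attn, alpha=0.5):
    """Blend visual importance with instruction relevance (hypothetical).

    vis_tokens: (N, d) visual tokens from the vision encoder
    txt_tokens: (M, d) instruction (text) token embeddings
    vis_attn:   (N,) visual importance, e.g. attention mass per token
    """
    # Instruction relevance: similarity of each visual token to the
    # mean-pooled instruction embedding.
    txt = txt_tokens.mean(dim=0, keepdim=True)                 # (1, d)
    rel = F.cosine_similarity(vis_tokens, txt, dim=-1)         # (N,)
    # Min-max normalize both signals so neither dominates the mix;
    # this avoids the bias toward salient but query-irrelevant regions.
    imp = (vis_attn - vis_attn.min()) / (vis_attn.max() - vis_attn.min() + 1e-6)
    rel = (rel - rel.min()) / (rel.max() - rel.min() + 1e-6)
    return alpha * imp + (1 - alpha) * rel                     # (N,) focus score
```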

Methodology

The FSR framework first focuses on key evidence, then scans for complementary context, and finally refines the scanned result by aggregating nearby informative tokens into the scan anchors, achieving efficient token pruning without exceeding the token budget. A sketch of the full pipeline follows.
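To make the three stages concrete, here is a minimal end-to-end sketch under assumed details: a fixed budget split between focus and scan tokens, cosine similarity as both the dissimilarity measure for scanning and the assignment measure for refining, and `focus_scores` from the sketch above. The function name `fsr_prune` and the `focus_ratio` split are hypothetical, not the authors' exact recipe.

```python
import torch
import torch.nn.functional as F

def fsr_prune(vis_tokens, scores, budget, focus_ratio=0.75):
    """vis_tokens: (N, d); scores: (N,) focus scores. Returns (budget, d)."""
    n_focus = int(budget * focus_ratio)
    n_scan = budget - n_focus

    # 1) Focus: keep the highest-scoring tokens as key evidence.
    focus_idx = scores.topk(n_focus).indices

    # 2) Scan: among the rest, pick tokens most *different* from the focused
    #    set (lowest maximum cosine similarity to any focused token).
    mask = torch.ones(vis_tokens.size(0), dtype=torch.bool)
    mask[focus_idx] = False
    rest_idx = mask.nonzero(as_tuple=True)[0]
    rest = F.normalize(vis_tokens[rest_idx], dim=-1)
    focused = F.normalize(vis_tokens[focus_idx], dim=-1)
    max_sim = (rest @ focused.T).max(dim=-1).values            # (N - n_focus,)
    scan_idx = rest_idx[(-max_sim).topk(n_scan).indices]

    # 3) Refine: assign each remaining (pruned) token to its most similar
    #    scan anchor and merge it in, weighted by its focus score, so the
    #    token budget stays unchanged.
    drop = mask.clone()
    drop[scan_idx] = False
    dropped, w = vis_tokens[drop], scores[drop]
    anchors = vis_tokens[scan_idx].clone()
    assign = (F.normalize(dropped, dim=-1)
              @ F.normalize(anchors, dim=-1).T).argmax(dim=-1)
    for a in range(n_scan):
        sel = assign == a
        if sel.any():
            ws = torch.cat([torch.ones(1), w[sel]])            # anchor gets weight 1
            pool = torch.cat([anchors[a:a + 1], dropped[sel]])
            anchors[a] = (ws[:, None] * pool).sum(dim=0) / ws.sum()

    return torch.cat([vis_tokens[focus_idx], anchors], dim=0)  # (budget, d)
```

The design point mirrored here is that refinement spends no extra budget: dropped tokens are folded into the scan anchors rather than kept as additional tokens.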

Original Abstract

Vision-language models (VLMs) often generate massive visual tokens that greatly increase inference latency and memory footprint; while training-free token pruning offers a practical remedy, existing methods still struggle to balance local evidence and global context under aggressive compression. We propose Focus-Scan-Refine (FSR), a human-inspired, plug-and-play pruning framework that mimics how humans answer visual questions: focus on key evidence, then scan globally if needed, and refine the scanned context by aggregating relevant details. FSR first focuses on key evidence by combining visual importance with instruction relevance, avoiding the bias toward visually salient but query-irrelevant regions. It then scans for complementary context conditioned on the focused set, selecting tokens that are most different from the focused evidence. Finally, FSR refines the scanned context by aggregating nearby informative tokens into the scan anchors via similarity-based assignment and score-weighted merging, without increasing the token budget. Extensive experiments across multiple VLM backbones and vision-language benchmarks show that FSR consistently improves the accuracy-efficiency trade-off over existing state-of-the-art pruning methods. The source codes can be found at https://github.com/ILOT-code/FSR
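A toy run composing the two sketches above, with arbitrary dimensions standing in for a real VLM's vision-encoder outputs and instruction embeddings:

```python
torch.manual_seed(0)
vis = torch.randn(576, 1024)    # e.g. a 24x24 grid of visual tokens
txt = torch.randn(32, 1024)     # instruction token embeddings
attn = torch.rand(576)          # per-token visual importance signal

scores = focus_scores(vis, txt, attn)
kept = fsr_prune(vis, scores, budget=144)   # 4x token compression
print(kept.shape)               # torch.Size([144, 1024])
```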

Tags

Vision-Language Models · Token Pruning · Visual Question Answering · Efficiency

arXiv Category

cs.CV