Multimodal Learning (relevance: 9/10)

PIO-FVLM: Rethinking Training-Free Visual Token Reduction for VLM Acceleration from an Inference-Objective Perspective

Haokui Zhang, Congyang Ou, Dawei Yan, Peng Wang, Qingsen Yan, Ying Li, Rong Xiao, Chunhua Shen
arXiv: 2602.04657v1 · Published: 2026-02-04 · Updated: 2026-02-04

AI Summary

PIO-FVLM accelerates VLM inference through objective-guided visual token reduction, substantially improving efficiency while preserving performance.

Key Contributions

  • Proposes PIO-FVLM, a training-free visual token reduction method
  • Uses a layer-local proxy loss to guide token importance ranking
  • Compatible with FlashAttention and easy to deploy in practice

Methodology

A layer-local proxy loss generates token-level gradient saliency, which guides the reordering of visual tokens; the most important tokens are then selected following the non-maximum suppression (NMS) principle, achieving visual token compression.
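The NMS-style selection step can be sketched as follows. This is a minimal illustration, assuming per-token saliency scores are already available (in PIO-FVLM they come from gradients of the layer-local proxy loss); the function name, grid-distance suppression criterion, and parameters here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def nms_token_select(saliency, positions, keep, radius):
    """Greedy NMS-style token selection: repeatedly keep the
    highest-saliency remaining token, then suppress all tokens
    within `radius` of it on the spatial grid.

    saliency  : (N,) per-token importance scores (assumed precomputed)
    positions : (N, 2) spatial grid coordinates of each visual token
    keep      : number of tokens to retain
    radius    : suppression radius in grid units
    """
    order = np.argsort(-saliency)           # tokens by descending saliency
    suppressed = np.zeros(len(saliency), dtype=bool)
    selected = []
    for i in order:
        if suppressed[i]:
            continue
        selected.append(int(i))
        if len(selected) == keep:
            break
        # suppress spatial neighbors of the token just kept
        dist = np.linalg.norm(positions - positions[i], axis=1)
        suppressed |= dist < radius
    return selected

# Toy usage: a 4x4 grid of 16 tokens with monotone saliency.
positions = np.array([[r, c] for r in range(4) for c in range(4)], dtype=float)
saliency = np.arange(16.0)
kept = nms_token_select(saliency, positions, keep=3, radius=1.5)
print(kept)  # → [15, 13, 7]
```

The suppression step is what distinguishes this from plain top-k selection: spatially adjacent tokens tend to carry redundant information, so NMS spreads the retained tokens across the image instead of clustering them in one salient region.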

Original Abstract

Recently, reducing redundant visual tokens in vision-language models (VLMs) to accelerate VLM inference has emerged as a hot topic. However, most existing methods rely on heuristics constructed based on inter-visual-token similarity or cross-modal visual-text similarity, which gives rise to certain limitations in compression performance and practical deployment. In contrast, we propose PIO-FVLM from the perspective of inference objectives, which transforms visual token compression into preserving output result invariance and selects tokens primarily by their importance to this goal. Specifically, vision tokens are reordered with the guidance of token-level gradient saliency generated by our designed layer-local proxy loss, a coarse constraint from the current layer to the final result. Then the most valuable vision tokens are selected following the non-maximum suppression (NMS) principle. The proposed PIO-FVLM is training-free and compatible with FlashAttention, friendly to practical application and deployment. It can be deployed independently as an encoder-free method, or combined with encoder compression approaches like VisionZip for use as an encoder-involved method. On LLaVA-Next-7B, PIO-FVLM retains just 11.1% of visual tokens but maintains 97.2% of the original performance, with a 2.67$\times$ prefill speedup, 2.11$\times$ inference speedup, 6.22$\times$ lower FLOPs, and 6.05$\times$ reduced KV Cache overhead. Our code is available at https://github.com/ocy1/PIO-FVLM.

Tags

VLM · Token Reduction · Inference Acceleration · Multimodal Learning

arXiv Category

cs.CV