PixelPrune: Pixel-Level Adaptive Visual Token Reduction via Predictive Coding
AI Summary
PixelPrune prunes redundant pixel patches before the ViT encoder via predictive-coding compression, accelerating both VLM inference and training.
Key Contributions
- Proposes PixelPrune, a pixel-level adaptive visual-token pruning method based on predictive coding
- PixelPrune operates before the ViT encoder, so it accelerates the entire inference pipeline
- PixelPrune is training-free, supports both lossless and lossy compression, and achieves speedups on document and GUI tasks
Methodology
Exploits the redundancy of image pixels: predictive coding identifies and removes duplicate pixel patches, reducing the ViT's computational load.
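To make the idea concrete, here is a minimal sketch of pixel-level patch deduplication. It uses naive exhaustive matching rather than the paper's predictive-coding scheme, and the function and parameter names (`prune_duplicate_patches`, `tau`) are illustrative assumptions, not PixelPrune's actual API. Setting `tau=0` keeps only pixel-exact unique patches (lossless); `tau>0` also merges near-duplicates (lossy), mirroring the paper's $τ$ threshold.

```python
import numpy as np

def prune_duplicate_patches(image: np.ndarray, patch: int = 14, tau: float = 0.0):
    """Illustrative sketch (not the paper's implementation): keep each
    patch only if no earlier patch matches it within tolerance tau."""
    H, W, C = image.shape
    gh, gw = H // patch, W // patch
    # Flatten the grid of patches into (num_patches, patch*patch*C) vectors.
    patches = (image[:gh * patch, :gw * patch]
               .reshape(gh, patch, gw, patch, C)
               .transpose(0, 2, 1, 3, 4)
               .reshape(gh * gw, -1)
               .astype(np.float32))
    kept, index_map = [], np.empty(gh * gw, dtype=np.int64)
    for i, p in enumerate(patches):
        # Reuse the first already-kept patch whose max pixel difference is <= tau.
        match = next((j for j, q in enumerate(kept)
                      if np.max(np.abs(p - q)) <= tau), None)
        if match is None:
            index_map[i] = len(kept)
            kept.append(p)
        else:
            index_map[i] = match
    # Unique patches to feed the ViT, plus a map to restore patch positions.
    return np.stack(kept), index_map
```

Only the unique patches would be encoded by the ViT; the index map lets downstream stages recover each patch's original grid position. A real implementation would replace the O(n²) scan with hashing or predictive comparison against neighboring patches.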
Original Abstract
Document understanding and GUI interaction are among the highest-value applications of Vision-Language Models (VLMs), yet they impose an exceptionally heavy computational burden: fine-grained text and small UI elements demand high-resolution inputs that produce tens of thousands of visual tokens. We observe that this cost is largely wasteful -- across document and GUI benchmarks, only 22--71\% of image patches are pixel-unique, the rest being exact duplicates of another patch in the same image. We propose \textbf{PixelPrune}, which exploits this pixel-level redundancy through predictive-coding-based compression, pruning redundant patches \emph{before} the Vision Transformer (ViT) encoder. Because it operates in pixel space prior to any neural computation, PixelPrune accelerates both the ViT encoder and the downstream LLM, covering the full inference pipeline. The method is training-free, requires no learnable parameters, and supports pixel-lossless compression ($\tau{=}0$) as well as controlled lossy compression ($\tau{>}0$). Experiments across three model scales and document and GUI benchmarks show that PixelPrune maintains competitive task accuracy while delivering up to 4.2$\times$ inference speedup and 1.9$\times$ training acceleration. Code is available at https://github.com/OPPO-Mente-Lab/PixelPrune.