Multimodal Learning Relevance: 9/10

ForestPrune: High-ratio Visual Token Compression for Video Multimodal Large Language Models via Spatial-Temporal Forest Modeling

Shaobo Ju, Baiyang Song, Tao Chen, Jiapeng Zhang, Qiong Wu, Chao Chang, HuaiXi Wang, Yiyi Zhou, Rongrong Ji
arXiv: 2603.22911v1 Published: 2026-03-24 Updated: 2026-03-24

AI Summary

ForestPrune achieves high-ratio visual token compression for video MLLMs via spatial-temporal forest modeling.

Key Contributions

  • Proposes ForestPrune, a novel token pruning method for video MLLMs.
  • Achieves efficient, high-ratio token pruning via spatial-temporal forest modeling.
  • Validates the effectiveness and superiority of ForestPrune on LLaVA-Video and LLaVA-OneVision.

Methodology

Constructs token forests under semantic, spatial, and temporal constraints, then evaluates the importance of token trees and nodes to reach a globally optimal pruning decision.
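The forest-then-prune idea can be illustrated with a minimal sketch. This is not the paper's algorithm: the linking rule (greedy cosine-similarity matching to the previous frame) and the scoring rule (keep the shallowest nodes, on the assumption that deep nodes repeat content carried by their ancestors) are hypothetical stand-ins, and the names `build_token_forest`, `prune_by_depth`, and `sim_thresh` are invented for illustration.

```python
import numpy as np

def build_token_forest(tokens, sim_thresh=0.8):
    """Illustrative greedy forest construction across frames.

    tokens: (T, N, D) array - T frames, N tokens per frame, D dims.
    Each token in frame t links to its most similar token in frame t-1
    (its parent) if cosine similarity >= sim_thresh; otherwise it starts
    a new tree (a root). Returns parent indices of shape (T, N), with -1
    marking roots.
    """
    T, N, _ = tokens.shape
    normed = tokens / np.linalg.norm(tokens, axis=-1, keepdims=True)
    parent = -np.ones((T, N), dtype=int)
    for t in range(1, T):
        sims = normed[t] @ normed[t - 1].T          # (N, N) cosine similarities
        best = sims.argmax(axis=1)                  # candidate parent per token
        keep = sims[np.arange(N), best] >= sim_thresh
        parent[t, keep] = best[keep]
    return parent

def prune_by_depth(parent, keep_ratio=0.1):
    """Hypothetical scoring: a token's depth in its tree (root = 0) is its
    redundancy score; keep only the shallowest keep_ratio fraction globally."""
    T, N = parent.shape
    depth = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        linked = parent[t] >= 0
        depth[t, linked] = depth[t - 1, parent[t, linked]] + 1
    flat = depth.ravel()
    budget = max(1, int(keep_ratio * flat.size))
    keep_idx = np.argsort(flat, kind="stable")[:budget]  # shallowest first
    mask = np.zeros(flat.size, dtype=bool)
    mask[keep_idx] = True
    return mask.reshape(T, N)                        # True = token retained
```

With three identical frames, every frame-1 and frame-2 token chains onto its frame-0 counterpart, so a 10% budget retains mostly roots; this mirrors the intuition that temporally repeated content can be pruned at high ratios.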

Original Abstract

Due to the great savings in computation and memory overhead, token compression has become a research hot-spot for MLLMs and has achieved remarkable progress in image-language tasks. However, for video, existing methods still fall short of high-ratio token compression. We attribute this shortcoming to the insufficient modeling of temporal and continual video content, and propose a novel, training-free token pruning method for video MLLMs, termed ForestPrune, which achieves effective and high-ratio pruning via Spatial-temporal Forest Modeling. In practice, ForestPrune constructs token forests across video frames based on semantic, spatial, and temporal constraints, forming an overall comprehension of videos. Afterwards, ForestPrune evaluates the importance of token trees and nodes based on tree depth and node roles, thereby obtaining a globally optimal pruning decision. To validate ForestPrune, we apply it to two representative video MLLMs, namely LLaVA-Video and LLaVA-OneVision, and conduct extensive experiments on a range of video benchmarks. The experimental results not only show its great effectiveness for video MLLMs, e.g., retaining 95.8% average accuracy while reducing tokens by 90% for LLaVA-OneVision, but also show its superior performance and efficiency over the compared token compression methods, e.g., +10.1% accuracy on MLVU and -81.4% pruning time versus FrameFusion on LLaVA-Video.

Tags

Video MLLM · Token Compression · Spatial-Temporal Modeling · Pruning

arXiv Categories

cs.CV cs.AI