Multimodal Learning · Relevance: 9/10

AdaptToken: Entropy-based Adaptive Token Selection for MLLM Long Video Understanding

Haozhe Qi, Kevin Qu, Mahdi Rad, Rui Wang, Alexander Mathis, Marc Pollefeys
arXiv: 2603.28696v1 · Published: 2026-03-30 · Updated: 2026-03-30

AI Summary

AdaptToken proposes an entropy-based adaptive token selection framework that improves the long-video understanding ability of MLLMs.

Main Contributions

  • Proposes a global control signal derived from model uncertainty for long-video token selection.
  • Proposes the AdaptToken framework, which estimates prompt relevance via entropy and allocates token budgets accordingly.
  • Proposes AdaptToken-Lite, which accelerates inference via an early-stopping mechanism.
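The uncertainty signal at the heart of these contributions is the entropy of the model's response distribution. A minimal sketch of how such a signal can be computed from answer logits is shown below; the names (`response_entropy`, `logits`) are illustrative assumptions, not the authors' implementation.

```python
import math

def response_entropy(logits):
    """Shannon entropy (in nats) of the softmax over answer logits.

    In the AdaptToken setting, low entropy means the model is confident
    when answering from a given video group, suggesting that group is
    relevant to the prompt. (Illustrative sketch, not the paper's code.)
    """
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # numerically stable softmax
    z = sum(exps)
    probs = [e / z for e in exps]
    return -sum(p * math.log(p) for p in probs if p > 0)

# A peaked distribution yields low entropy; a flat one yields high entropy.
confident = response_entropy([10.0, 0.0, 0.0, 0.0])
uncertain = response_entropy([1.0, 1.0, 1.0, 1.0])  # = log(4)
```

Comparing these two values illustrates why entropy can serve as a global, group-comparable control signal: it is computed on the same scale for every group, regardless of where the group sits in the video.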

Methodology

The video is split into groups; cross-modal attention is extracted to rank tokens within each group; the model's response entropy estimates each group's relevance to the prompt; a global token budget is allocated across groups accordingly; and early stopping is supported.
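The two control decisions in this pipeline, entropy-driven budget allocation and early stopping, can be sketched as follows. The softmax weighting, the threshold, and all function names are assumptions for illustration; the paper's exact formulation may differ.

```python
import math

def allocate_budget(entropies, total_budget, temperature=1.0):
    """Allocate more tokens to groups where response entropy is lower
    (lower uncertainty => higher estimated prompt relevance).
    Softmax over negated entropies is an illustrative choice."""
    scores = [-h / temperature for h in entropies]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]  # stable softmax
    z = sum(weights)
    return [round(total_budget * w / z) for w in weights]

def process_groups(group_entropies, stop_threshold=0.5):
    """AdaptToken-Lite-style early stop: process groups in order and
    skip the rest once the model becomes sufficiently certain."""
    processed = []
    for i, h in enumerate(group_entropies):
        processed.append(i)
        if h < stop_threshold:
            break  # confident enough; remaining groups are skipped
    return processed

budgets = allocate_budget([0.2, 1.0, 2.0], total_budget=100)
visited = process_groups([1.2, 0.9, 0.3, 0.8])
```

Here the most certain group receives the largest token share, and processing halts at the third group because its entropy falls below the threshold, which is how AdaptToken-Lite cuts inference time roughly in half on the reported benchmarks.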

Original Abstract

Long video understanding remains challenging for Multi-modal Large Language Models (MLLMs) due to high memory costs and context-length limits. Prior approaches mitigate this by scoring and selecting frames/tokens within short clips, but they lack a principled mechanism to (i) compare relevance across distant video clips and (ii) stop processing once sufficient evidence has been gathered. We propose AdaptToken, a training-free framework that turns an MLLM's self-uncertainty into a global control signal for long-video token selection. AdaptToken splits a video into groups, extracts cross-modal attention to rank tokens within each group, and uses the model's response entropy to estimate each group's prompt relevance. This entropy signal enables a global token budget allocation across groups and further supports early stopping (AdaptToken-Lite), skipping the remaining groups when the model becomes sufficiently certain. Across four long-video benchmarks (VideoMME, LongVideoBench, LVBench, and MLVU) and multiple base MLLMs (7B-72B), AdaptToken consistently improves accuracy (e.g., +6.7 on average over Qwen2.5-VL 7B) and continues to benefit from extremely long inputs (up to 10K frames), while AdaptToken-Lite reduces inference time by about half with comparable performance. Project page: https://haozheqi.github.io/adapt-token

Tags

MLLM · Long Video Understanding · Token Selection · Entropy

arXiv Categories

cs.CV cs.AI