Multimodal Learning Relevance: 9/10

GroundVTS: Visual Token Sampling in Multimodal Large Language Models for Video Temporal Grounding

Rong Fan, Kaiyan Xiao, Minghao Zhu, Liuyi Wang, Kai Dai, Zhao Yang
arXiv: 2604.02093v1 Published: 2026-04-02 Updated: 2026-04-02

AI Summary

GroundVTS improves the temporal grounding performance of video large language models through query-guided visual token sampling.

Key Contributions

  • Proposes the GroundVTS architecture, which optimizes how video information is extracted
  • Introduces a fine-grained, query-guided visual token filtering mechanism
  • Adopts a progressive optimization strategy to adapt to non-uniform feature distributions

Methodology

GroundVTS filters visual tokens under the guidance of the text query and applies a progressive optimization strategy, strengthening the model's ability to capture temporal dependencies.
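The query-guided filtering idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the cosine-similarity score, and the fixed keep ratio are all assumptions; the actual mechanism is described in the paper as fine-grained filtering applied before tokens enter the LLM.

```python
import numpy as np

def query_guided_token_filter(frame_tokens, query_emb, keep_ratio=0.25):
    """Hypothetical sketch: keep the visual tokens most similar to the query.

    frame_tokens: (T, D) array of per-frame visual token embeddings.
    query_emb:    (D,) text-query embedding.
    Retains the top keep_ratio fraction of tokens by cosine similarity
    to the query, preserving their temporal order.
    """
    # Cosine similarity between each visual token and the query embedding
    tok = frame_tokens / np.linalg.norm(frame_tokens, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    scores = tok @ q  # shape (T,)

    k = max(1, int(len(scores) * keep_ratio))
    # Take the k highest-scoring tokens, then sort indices to restore
    # temporal order (coherence along the time axis)
    keep = np.sort(np.argsort(scores)[-k:])
    return frame_tokens[keep], keep

# Toy usage: 8 frame tokens with 4-dim embeddings
rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 4))
query = rng.normal(size=4)
kept, idx = query_guided_token_filter(tokens, query, keep_ratio=0.5)
```

Keeping the selected indices sorted is what preserves temporal coherence: the LLM still sees the surviving tokens in their original order, only with uninformative segments dropped.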

Original Abstract

Video temporal grounding (VTG) is a critical task in video understanding and a key capability for extending video large language models (Vid-LLMs) to broader applications. However, existing Vid-LLMs rely on uniform frame sampling to extract video information, resulting in a sparse distribution of key frames and the loss of crucial temporal cues. To address this limitation, we propose Grounded Visual Token Sampling (GroundVTS), a Vid-LLM architecture that focuses on the most informative temporal segments. GroundVTS employs a fine-grained, query-guided mechanism to filter visual tokens before feeding them into the LLM, thereby preserving essential spatio-temporal information and maintaining temporal coherence. Furthermore, we introduce a progressive optimization strategy that enables the LLM to effectively adapt to the non-uniform distribution of visual features, enhancing its ability to model temporal dependencies and achieve precise video localization. We comprehensively evaluate GroundVTS on three standard VTG benchmarks, where it outperforms existing methods, achieving a 7.7-point improvement in mIoU for moment retrieval and a 12.0-point improvement in mAP for highlight detection. Code is available at https://github.com/Florence365/GroundVTS.

Tags

Video Temporal Grounding Multimodal Learning Large Language Models Visual Token Sampling

arXiv Categories

cs.CV