Multimodal Learning Relevance: 9/10

CL-VISTA: Benchmarking Continual Learning in Video Large Language Models

Haiyang Guo, Yichen Shi, Fei Zhu, Wenzhuo Liu, Hongbo Zhao, Fanhu Zeng, Shijie Ma, Da-Han Wang, Xu-Yao Zhang
arXiv: 2604.00677v1 Published: 2026-04-01 Updated: 2026-04-01

AI Summary

CL-VISTA is a benchmark tailored to continual learning in Video-LLMs, revealing the trade-offs among performance, efficiency, and memory.

Main Contributions

  • Proposes the CL-VISTA benchmark for evaluating the continual learning ability of Video-LLMs.
  • Covers 8 diverse tasks that effectively expose catastrophic forgetting.
  • Establishes a comprehensive evaluation framework with 6 protocols across 3 key dimensions.
  • Benchmarks 10 mainstream continual learning methods, revealing trade-offs in performance.

Methodology

Constructs a benchmark spanning multiple video-understanding tasks, then evaluates a range of continual learning methods on it under different protocols, analyzing performance, computational efficiency, and memory footprint.
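The paper does not spell out its metric formulas here, but continual-learning benchmarks of this kind are typically scored from a task-accuracy matrix. As a minimal illustrative sketch (standard CL metrics, not necessarily CL-VISTA's exact protocol), `A[i][j]` below denotes accuracy on task `j` after sequentially training on tasks `0..i`:

```python
# Illustrative sketch of standard continual-learning metrics (assumed, not
# taken from the paper): A[i][j] = accuracy on task j after training task i.

def average_accuracy(A):
    """Mean accuracy over all tasks after the final training stage."""
    T = len(A)
    return sum(A[T - 1][j] for j in range(T)) / T

def average_forgetting(A):
    """Mean drop from each task's best earlier accuracy to its final one."""
    T = len(A)
    drops = []
    for j in range(T - 1):  # the last task has no later stage to forget in
        best = max(A[i][j] for i in range(j, T - 1))
        drops.append(best - A[T - 1][j])
    return sum(drops) / len(drops)

# Toy 3-task example: accuracy on earlier tasks decays as new tasks arrive.
A = [
    [0.80, 0.00, 0.00],
    [0.70, 0.85, 0.00],
    [0.60, 0.75, 0.90],
]
print(average_accuracy(A))    # mean of the last row: (0.60 + 0.75 + 0.90) / 3
print(average_forgetting(A))  # mean of drops 0.20 and 0.10
```

A benchmark like CL-VISTA would report such performance numbers alongside its efficiency and memory dimensions, which this sketch does not cover.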

Original Abstract

Video Large Language Models (Video-LLMs) require continual learning to adapt to non-stationary real-world data. However, existing benchmarks fall short of evaluating modern foundation models: many still rely on models without large-scale pre-training, and prevailing benchmarks typically partition a single dataset into sub-tasks, resulting in high task redundancy and negligible forgetting on pre-trained Video-LLMs. To address these limitations, we propose CL-VISTA, a benchmark tailored for continual video understanding of Video-LLMs. By curating 8 diverse tasks spanning perception, understanding, and reasoning, CL-VISTA induces substantial distribution shifts that effectively expose catastrophic forgetting. To systematically assess CL methods, we establish a comprehensive evaluation framework comprising 6 distinct protocols across 3 critical dimensions: performance, computational efficiency, and memory footprint. Notably, the performance dimension incorporates a general video understanding assessment to determine whether CL methods genuinely enhance foundational intelligence or merely induce task-specific overfitting. Extensive benchmarking of 10 mainstream CL methods reveals a fundamental trade-off: no single approach achieves universal superiority across all dimensions. Methods that successfully mitigate catastrophic forgetting tend to compromise generalization or incur prohibitive computational and memory overheads. We hope CL-VISTA provides critical insights for advancing continual learning in multimodal foundation models.

Tags

Video-LLM Continual Learning Benchmark Multimodal

arXiv Category

cs.CV