Question-guided Visual Compression with Memory Feedback for Long-Term Video Understanding
AI Summary
Proposes the QViC-MF framework, which uses a question-guided memory feedback mechanism to improve performance on long-term video understanding tasks.
Main Contributions
- Proposes the Question-guided Visual Compression with Memory Feedback (QViC-MF) framework
- Designs a Question-guided Multimodal Selective Attention (QMSA) module
- Achieves significant performance gains on multiple long-term video understanding benchmarks
Methodology
A question-guided multimodal selective attention mechanism iteratively extracts question-relevant visual information from both the current clip and the memory, performing visual compression and feeding the result back into the memory.
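The iterative compress-and-feedback loop described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual implementation: `question_guided_compress` is a hypothetical helper that scores visual tokens by dot-product attention against a question embedding and keeps only the top-scoring tokens, which then serve as the memory for the next clip.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def question_guided_compress(question_emb, clip_tokens, memory_tokens, keep=4):
    # Hypothetical stand-in for QMSA: pool current-clip tokens with memory
    # tokens, score each by attention to the question, keep the top-`keep`.
    tokens = np.concatenate([clip_tokens, memory_tokens], axis=0)  # (N, d)
    scores = softmax(tokens @ question_emb / np.sqrt(tokens.shape[1]))
    top = np.argsort(scores)[::-1][:keep]
    return tokens[top]

# Iterate over clips; the compressed output becomes the memory fed back
# into perception of the next clip (the feedback loop from the paper).
rng = np.random.default_rng(0)
d = 16
question = rng.normal(size=d)
memory = np.zeros((0, d))          # memory starts empty
for _ in range(3):                 # three toy clips
    clip = rng.normal(size=(8, d)) # 8 visual tokens per clip
    memory = question_guided_compress(question, clip, memory, keep=4)
print(memory.shape)                # fixed-size compressed memory
```

The key property this sketch captures is that memory size stays constant per clip while still being conditioned on both the question and past context; the real QMSA module learns this selection rather than using raw dot-product scores.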
Original Abstract
In the context of long-term video understanding with large multimodal models, many frameworks have been proposed. Although transformer-based visual compressors and memory-augmented approaches are often used to process long videos, they usually compress each frame independently and therefore fail to achieve strong performance on tasks that require understanding complete events, such as temporal ordering tasks in MLVU and VNBench. This motivates us to rethink the conventional one-way scheme from perception to memory, and instead establish a feedback-driven process in which past visual contexts stored in the context memory can benefit ongoing perception. To this end, we propose Question-guided Visual Compression with Memory Feedback (QViC-MF), a framework for long-term video understanding. At its core is a Question-guided Multimodal Selective Attention (QMSA) module, which learns to preserve visual information related to the given question from both the current clip and the related past frames in the memory. The compressor and memory feedback operate iteratively for each clip of the entire video. This simple yet effective design yields large performance gains on long-term video understanding tasks. Extensive experiments show that our method achieves significant improvement over current state-of-the-art methods by 6.1% on MLVU test, 8.3% on LVBench, 18.3% on VNBench Long, and 3.7% on VideoMME Long. The code will be released publicly.