When Thinking Hurts: Mitigating Visual Forgetting in Video Reasoning via Frame Repetition
AI Summary
To address the loss of visual information in Video Question Answering, this paper proposes the FrameRepeat framework, which strengthens visual cues through frame repetition.
Key Contributions
- Proposes the FrameRepeat framework, which automatically identifies and repeats key frames.
- Proposes the Add-One-In (AOI) training strategy, which uses MLLM output probabilities to generate supervision signals.
- Experiments demonstrate the effectiveness and generalizability of FrameRepeat across different models and datasets.
Methodology
A frame scoring network is trained via AOI to guide the frame repetition behavior, reinforcing important visual cues during the reasoning process.
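Based on the abstract alone, the AOI supervision signal can be sketched as a per-frame "repeat gain": the change in the model's answer log-probability when one frame is duplicated in the input. The sketch below is an illustrative assumption, not the paper's implementation; `toy_logprob` is a hypothetical stand-in for an MLLM's answer log-likelihood, with frames reduced to scalars.

```python
def repeat_gain(answer_logprob, frames, frame_idx):
    """Add-One-In (AOI) signal: how much does repeating frame `frame_idx`
    improve the model's log-probability of the correct answer?"""
    base = answer_logprob(frames)
    augmented = frames + [frames[frame_idx]]  # duplicate one frame
    return answer_logprob(augmented) - base

# Hypothetical stand-in for an MLLM's answer log-likelihood: the "answer"
# is best supported by frames whose value is near 1.0.
def toy_logprob(frames):
    return -sum((f - 1.0) ** 2 for f in frames) / len(frames)

frames = [0.2, 1.0, 0.5]
gains = [repeat_gain(toy_logprob, frames, i) for i in range(len(frames))]
best = max(range(len(frames)), key=lambda i: gains[i])
```

In this toy setup, the frame closest to 1.0 yields the largest (positive) repeat gain; such per-frame gains would serve as regression targets for the lightweight frame scoring network.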
Original Abstract
Recently, Multimodal Large Language Models (MLLMs) have demonstrated significant potential in complex visual tasks through the integration of Chain-of-Thought (CoT) reasoning. However, in Video Question Answering, extended thinking processes do not consistently yield performance gains and may even lead to degradation due to "visual anchor drifting", where models increasingly rely on self-generated text, sidelining visual inputs and causing hallucinations. While existing mitigations typically introduce specific mechanisms for the model to re-attend to visual inputs during inference, these approaches often incur prohibitive training costs and suffer from poor generalizability across different architectures. To address this, we propose FrameRepeat, an automated enhancement framework which features a lightweight repeat scoring module that enables Video-LLMs to autonomously identify which frames should be reinforced. We introduce a novel training strategy, Add-One-In (AOI), that uses MLLM output probabilities to generate supervision signals representing repeat gain. This can be used to train a frame scoring network, which guides the frame repetition behavior. Experimental results across multiple models and datasets demonstrate that FrameRepeat is both effective and generalizable in strengthening important visual cues during the reasoning process.