TwiFF (Think With Future Frames): A Large-Scale Dataset for Dynamic Visual Reasoning
AI Summary
Proposes TwiFF-2.7M, a large-scale dataset for dynamic visual reasoning, together with a corresponding TwiFF model that achieves significant gains on dynamic visual question answering.
Key Contributions
- TwiFF-2.7M, a large-scale dataset for dynamic visual reasoning
- TwiFF-Bench, a high-quality evaluation benchmark
- The TwiFF model, which combines video generation and image comprehension capabilities for dynamic visual reasoning
Methodology
Construct a large-scale video dataset, design an evaluation benchmark, and propose the TwiFF model, which combines video generation with image comprehension to iteratively generate future frames and textual reasoning.
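The iterative loop described above, alternating between generating a future frame and producing a textual reasoning step before emitting a final answer, can be sketched as follows. This is a minimal illustration only: the function names (`generate_future_frame`, `generate_thought`, `twiff_reasoning`) and the loop structure are assumptions, not the authors' actual API, and the generators are stubs standing in for the pretrained video-generation and image-comprehension components.

```python
# Hypothetical sketch of TwiFF-style iterative visual reasoning.
# All names and signatures here are assumptions for illustration.

def generate_future_frame(frames, thoughts):
    # Stand-in for a pretrained video-generation head that
    # extrapolates the next frame from the trajectory so far.
    return f"frame_{len(frames)}"

def generate_thought(frames, thoughts, question):
    # Stand-in for an image-comprehension head that reasons
    # over the newly generated frame in light of the question.
    return f"thought about {frames[-1]} w.r.t. {question!r}"

def twiff_reasoning(initial_frames, question, num_steps=3):
    """Alternate future-frame generation and textual reasoning,
    then derive a final answer from the accumulated trajectory."""
    frames = list(initial_frames)
    thoughts = []
    for _ in range(num_steps):
        frames.append(generate_future_frame(frames, thoughts))
        thoughts.append(generate_thought(frames, thoughts, question))
    answer = f"answer derived from {len(thoughts)} reasoning steps"
    return frames, thoughts, answer

frames, thoughts, answer = twiff_reasoning(["frame_0"], "What happens next?")
print(answer)
```

The key design point the sketch captures is that visual cues (future frames) and textual thoughts are interleaved, so each reasoning step is grounded in a freshly generated frame rather than in static input alone.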
Original Abstract
Visual Chain-of-Thought (VCoT) has emerged as a promising paradigm for enhancing multimodal reasoning by integrating visual perception into intermediate reasoning steps. However, existing VCoT approaches are largely confined to static scenarios and struggle to capture the temporal dynamics essential for tasks such as instruction, prediction, and camera motion. To bridge this gap, we propose TwiFF-2.7M, the first large-scale, temporally grounded VCoT dataset derived from $2.7$ million video clips, explicitly designed for dynamic visual question answering. Accompanying this, we introduce TwiFF-Bench, a high-quality evaluation benchmark of $1,078$ samples that assesses both the plausibility of reasoning trajectories and the correctness of final answers in open-ended dynamic settings. Building on these foundations, we propose the TwiFF model, a unified model that synergistically leverages pre-trained video generation and image comprehension capabilities to produce temporally coherent visual reasoning cues, iteratively generating future action frames and textual reasoning. Extensive experiments demonstrate that TwiFF significantly outperforms existing VCoT methods and Textual Chain-of-Thought baselines on dynamic reasoning tasks, validating its effectiveness for visual question answering in dynamic scenarios. Our code and data are available at https://github.com/LiuJunhua02/TwiFF.