Multimodal Learning Relevance: 9/10

Think While Watching: Online Streaming Segment-Level Memory for Multi-Turn Video Reasoning in Multimodal Large Language Models

Lu Wang, Zhuoran Jin, Yupu Hao, Yubo Chen, Kang Liu, Yulong Ao, Jun Zhao
arXiv: 2603.11896v1 Published: 2026-03-12 Updated: 2026-03-12

AI Summary

Proposes the Think While Watching framework, which improves MLLMs' multi-turn reasoning over continuously arriving video streams while reducing the number of output tokens.

Main Contributions

  • Proposes a memory-anchored streaming video reasoning framework
  • Builds a three-stage, multi-round CoT dataset and adopts a stage-matched training strategy
  • Designs an efficient parallel pipeline with an adaptive attention backend

Methodology

Builds segment-level memory, runs perception and generation in parallel, enforces causality via a streaming causal mask and streaming positional encoding, and employs an adaptive attention mechanism.
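The segment-level streaming causal mask can be sketched as follows. The block-wise rule shown here (tokens may attend to any token in their own or an earlier segment, never a later one) is an assumption about the paper's design, not its exact implementation; the function name is hypothetical.

```python
import numpy as np

def segment_streaming_causal_mask(segment_ids):
    """Build a boolean attention mask over a flat token sequence.

    segment_ids[i] is the index of the video segment token i belongs to.
    A query token may attend to any key token whose segment arrived no
    later than its own, so reasoning over segment s never sees frames
    from future segments s+1, s+2, ...
    """
    seg = np.asarray(segment_ids)
    # mask[q, k] is True when query token q may attend to key token k
    return seg[None, :] <= seg[:, None]

# Five tokens drawn from three segments arriving in order
mask = segment_streaming_causal_mask([0, 0, 1, 1, 2])
```

With this block-wise rule, attention is bidirectional within a segment but strictly causal across segment boundaries, which is what lets a fixed per-segment memory be written once and never revised.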

Original Abstract

Multimodal large language models (MLLMs) have shown strong performance on offline video understanding, but most are limited to offline inference or have weak online reasoning, making multi-turn interaction over continuously arriving video streams difficult. Existing streaming methods typically use an interleaved perception-generation paradigm, which prevents concurrent perception and generation and leads to early memory decay as streams grow, hurting long-range dependency modeling. We propose Think While Watching, a memory-anchored streaming video reasoning framework that preserves continuous segment-level memory during multi-turn interaction. We build a three-stage, multi-round chain-of-thought dataset and adopt a stage-matched training strategy, while enforcing strict causality through a segment-level streaming causal mask and streaming positional encoding. During inference, we introduce an efficient pipeline that overlaps watching and thinking and adaptively selects the best attention backend. Under both single-round and multi-round streaming input protocols, our method achieves strong results. Built on Qwen3-VL, it improves single-round accuracy by 2.6% on StreamingBench and by 3.79% on OVO-Bench. In the multi-round setting, it maintains performance while reducing output tokens by 56%. Code is available at: https://github.com/wl666hhh/Think_While_Watching/
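The "overlaps watching and thinking" pipeline mentioned in the abstract is, at its core, a producer-consumer design: perception keeps encoding incoming segments while reasoning consumes whichever segments are already available. A minimal sketch with a bounded queue, where all names and the string payloads are hypothetical stand-ins, not the paper's implementation:

```python
import queue
import threading

def watch(frames, segments):
    """Perception stage: emit encoded segments as the stream arrives."""
    for seg in frames:          # each item stands in for an encoded segment
        segments.put(seg)
    segments.put(None)          # end-of-stream sentinel

def think(segments, answers):
    """Reasoning stage: consume segments as soon as they are ready."""
    while (seg := segments.get()) is not None:
        answers.append(f"reasoned over {seg}")

segments = queue.Queue(maxsize=2)  # bounded buffer caps in-flight memory
answers = []
producer = threading.Thread(target=watch, args=(["seg0", "seg1", "seg2"], segments))
consumer = threading.Thread(target=think, args=(segments, answers))
producer.start(); consumer.start()
producer.join(); consumer.join()
```

The bounded queue is the key design choice: it lets the two stages run concurrently while preventing perception from racing arbitrarily far ahead of reasoning.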

Tags

MLLM Video Reasoning Streaming Multi-turn Interaction

arXiv Categories

cs.CV cs.AI cs.CL