Multimodal Learning · Relevance: 9/10

Vista: Scene-Aware Optimization for Streaming Video Question Answering under Post-Hoc Queries

Haocheng Lu, Nan Zhang, Wei Tao, Xiaoyang Qu, Guokuan Li, Jiguang Wan, Jianzong Wang
arXiv: 2602.08448v1 · Published: 2026-02-09 · Updated: 2026-02-09

AI Summary

Vista proposes a scene-aware optimization scheme for streaming video QA, enabling efficient and scalable reasoning.

Key Contributions

  • Scene-aware segmentation
  • Scene-aware compression
  • Scene-aware recall

Methodology

Incoming video frames are dynamically segmented into scene units, each scene is compressed into a compact token representation, and relevant scenes are selectively recalled according to the query.
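The three-stage pipeline above can be sketched in miniature. This is an illustrative toy, not the paper's implementation: frames are stand-in feature vectors, segmentation uses a simple cosine-similarity threshold, compression is mean pooling, and recall ranks scenes by query similarity. All thresholds, pooling choices, and scoring here are assumptions for demonstration.

```python
# Toy sketch of Vista-style scene-aware segmentation, compression,
# and recall over a stream of frame feature vectors (assumed inputs).
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def segment_scenes(frames, threshold=0.8):
    # Scene-aware segmentation: start a new scene whenever an
    # incoming frame is dissimilar to the current scene's last frame.
    scenes = []
    for frame in frames:
        if scenes and cosine(scenes[-1][-1], frame) >= threshold:
            scenes[-1].append(frame)
        else:
            scenes.append([frame])
    return scenes

def compress_scene(scene):
    # Scene-aware compression: mean-pool a scene's frames into one
    # compact vector (a stand-in for the token representation).
    n = len(scene)
    return [sum(dim) / n for dim in zip(*scene)]

def recall_scenes(query_vec, compressed, k=1):
    # Scene-aware recall: indices of the top-k scenes whose
    # compressed representation best matches the query.
    ranked = sorted(range(len(compressed)),
                    key=lambda i: cosine(query_vec, compressed[i]),
                    reverse=True)
    return ranked[:k]

# Toy stream: two visually distinct "scenes" in 2-D feature space.
frames = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
scenes = segment_scenes(frames, threshold=0.8)
compressed = [compress_scene(s) for s in scenes]
top = recall_scenes([0.0, 1.0], compressed, k=1)
print(len(scenes), top)  # two scenes; the query matches the second
```

In the full system, compressed scene tokens would stay in GPU memory for index-based retrieval while full-resolution frames are offloaded to CPU memory; this sketch only shows the control flow of the three stages.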

Original Abstract

Streaming video question answering (Streaming Video QA) poses distinct challenges for multimodal large language models (MLLMs), as video frames arrive sequentially and user queries can be issued at arbitrary time points. Existing solutions relying on fixed-size memory or naive compression often suffer from context loss or memory overflow, limiting their effectiveness in long-form, real-time scenarios. We present Vista, a novel framework for scene-aware streaming video QA that enables efficient and scalable reasoning over continuous video streams. The innovation of Vista can be summarized in three aspects: (1) scene-aware segmentation, where Vista dynamically clusters incoming frames into temporally and visually coherent scene units; (2) scene-aware compression, where each scene is compressed into a compact token representation and stored in GPU memory for efficient index-based retrieval, while full-resolution frames are offloaded to CPU memory; and (3) scene-aware recall, where relevant scenes are selectively recalled and reintegrated into the model input upon receiving a query, enabling both efficiency and completeness. Vista is model-agnostic and integrates seamlessly with a variety of vision-language backbones, enabling long-context reasoning without compromising latency or memory efficiency. Extensive experiments on StreamingBench demonstrate that Vista achieves state-of-the-art performance, establishing a strong baseline for real-world streaming video understanding.

Tags

Streaming Video QA · Multimodal Learning · Scene Awareness

arXiv Categories

cs.CV cs.AI