Multimodal Learning | Relevance: 9/10

More than the Sum: Panorama-Language Models for Adverse Omni-Scenes

Weijia Fan, Ruiping Liu, Jiale Wei, Yufan Chen, Junwei Zheng, Zichao Zeng, Jiaming Zhang, Qiufu Li, Linlin Shen, Rainer Stiefelhagen
arXiv: 2603.09573v1 Published: 2026-03-10 Updated: 2026-03-10

AI Summary

Proposes the Panorama-Language Modeling (PLM) paradigm for understanding complex panoramic scenes, going beyond traditional multi-view stitching.

Key Contributions

  • Propose the Panorama-Language Modeling (PLM) paradigm
  • Construct PanoVQA, a large-scale panoramic VQA dataset
  • Design a plug-and-play panoramic sparse attention module

Methodology

PLM augments an existing pinhole-based VLM with a plug-and-play panoramic sparse attention module, enabling it to process equirectangular panoramas and perform visual question answering without retraining.
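The abstract does not spell out how the sparse attention is structured, so the sketch below is only one plausible reading of it: a local-window attention mask over equirectangular patch tokens whose longitude axis is treated as periodic, so tokens near the left and right image borders can attend across the seam. The function names (`panoramic_sparse_mask`, `sparse_attention`), the neighborhood rule, and the grid sizes are all illustrative assumptions, not the authors' actual module.

```python
# Hypothetical sketch of a "panoramic sparse attention" mask; the paper's
# real implementation is not described here, so the neighborhood rule and
# all names below are assumptions for illustration only.
import torch


def panoramic_sparse_mask(h: int, w: int, radius: int = 2) -> torch.Tensor:
    """Boolean (h*w, h*w) attention mask over an h x w grid of
    equirectangular patch tokens. Each token attends to a local window;
    the longitude (width) axis wraps around, so attention crosses the
    left/right seam of the panorama."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    ys, xs = ys.flatten(), xs.flatten()          # (h*w,) token coordinates
    dy = (ys[:, None] - ys[None, :]).abs()       # latitude distance
    dx = (xs[:, None] - xs[None, :]).abs()
    dx = torch.minimum(dx, w - dx)               # circular longitude distance
    return (dy <= radius) & (dx <= radius)       # True = pair may attend


def sparse_attention(q, k, v, mask):
    """Plain scaled dot-product attention with disallowed pairs masked out;
    a stand-in for how such a mask could be injected into a frozen VLM."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return scores.softmax(dim=-1) @ v


if __name__ == "__main__":
    h, w, d = 8, 16, 32                          # toy patch grid and dim
    mask = panoramic_sparse_mask(h, w, radius=2)
    x = torch.randn(h * w, d)
    out = sparse_attention(x, x, x, mask)
    print(mask.shape, out.shape)                 # (128, 128), (128, 32)
```

In a plug-and-play setting of this kind, the mask would be applied inside the attention layers of a frozen pinhole VLM; the wrap-around longitude term is what would distinguish it from ordinary windowed attention on a pinhole image.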

Original Abstract

Existing vision-language models (VLMs) are tailored for pinhole imagery, stitching multiple narrow field-of-view inputs to piece together a complete omni-scene understanding. Yet, such multi-view perception overlooks the holistic spatial and contextual relationships that a single panorama inherently preserves. In this work, we introduce the Panorama-Language Modeling (PLM) paradigm, a unified $360^\circ$ vision-language reasoning paradigm that is more than the sum of its pinhole counterparts. In addition, we present PanoVQA, a large-scale panoramic VQA dataset that involves adverse omni-scenes, enabling comprehensive reasoning under object occlusions and driving accidents. To establish a foundation for PLM, we develop a plug-and-play panoramic sparse attention module that allows existing pinhole-based VLMs to process equirectangular panoramas without retraining. Extensive experiments demonstrate that our PLM achieves superior robustness and holistic reasoning under challenging omni-scenes, yielding understanding greater than the sum of its narrow parts. Project page: https://github.com/InSAI-Lab/PanoVQA.

Tags

Panoramic Vision · Vision-Language Models · VQA · Panoramic Sparse Attention

arXiv Categories

cs.CV