Multimodal Learning Relevance: 9/10

360° Image Perception with MLLMs: A Comprehensive Benchmark and a Training-Free Method

Huyen T. T. Tran, Van-Quang Nguyen, Farros Alferro, Kang-Jun Liu, Takayuki Okatani
arXiv: 2603.16179v1 Published: 2026-03-17 Updated: 2026-03-17

AI Summary

To address MLLMs' shortcomings in 360° image understanding, this work introduces the 360Bench benchmark and the training-free Free360 framework.

Key Contributions

  • Introduces 360Bench, a high-resolution 360° image VQA benchmark.
  • Systematically evaluates MLLMs and enhancement methods on their 360° image understanding capabilities.
  • Proposes Free360, a training-free scene-graph-based framework for improving 360° VQA performance.

Methodology

Free360 decomposes the reasoning process into modular steps, applies adaptive spherical image transformations tailored to each step, and integrates the results into a unified graph representation for answer generation.
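The paper does not specify its adaptive spherical transformations here, but methods in this space typically build on equirectangular-to-perspective reprojection: rendering a distortion-reduced virtual camera view from the 360° panorama. A minimal sketch of that standard operation (function name, axis conventions, and nearest-neighbor sampling are illustrative choices, not the paper's implementation):

```python
import numpy as np

def equirect_to_perspective(equi, fov_deg, yaw_deg, pitch_deg, out_hw):
    """Sample a perspective view from an equirectangular 360° image.

    equi: (H, W, C) equirectangular panorama; yaw/pitch pick the viewing
    direction; fov_deg is the virtual camera's horizontal field of view.
    """
    H, W = equi.shape[:2]
    out_h, out_w = out_hw
    fov = np.radians(fov_deg)
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)

    # Pixel grid on the virtual image plane at focal distance f.
    f = 0.5 * out_w / np.tan(0.5 * fov)
    xs = np.arange(out_w) - 0.5 * (out_w - 1)
    ys = np.arange(out_h) - 0.5 * (out_h - 1)
    x, y = np.meshgrid(xs, ys)

    # Camera-space rays (z forward, x right, y down), normalized.
    dirs = np.stack([x, y, np.full_like(x, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate rays: pitch about the x-axis, then yaw about the y-axis.
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    dirs = dirs @ (Ry @ Rx).T

    # Ray direction -> longitude/latitude -> equirectangular pixel.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])      # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))     # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return equi[v, u]   # nearest-neighbor lookup
```

A pipeline like Free360's could call this per reasoning step with different yaw/pitch/FoV settings, then attach each extracted view's content to nodes of a shared scene graph.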

Original Abstract

Multimodal Large Language Models (MLLMs) have shown impressive abilities in understanding and reasoning over conventional images. However, their perception of 360° images remains largely underexplored. Unlike conventional images, 360° images capture the entire surrounding environment, enabling holistic spatial reasoning but introducing challenges such as geometric distortion and complex spatial relations. To comprehensively assess MLLMs' capabilities to perceive 360° images, we introduce 360Bench, a Visual Question Answering (VQA) benchmark featuring 7K-resolution 360° images, seven representative (sub)tasks with annotations carefully curated by human annotators. Using 360Bench, we systematically evaluate seven MLLMs and six enhancement methods, revealing their shortcomings in 360° image perception. To address these challenges, we propose Free360, a training-free scene-graph-based framework for high-resolution 360° VQA. Free360 decomposes the reasoning process into modular steps, applies adaptive spherical image transformations to 360° images tailored to each step, and seamlessly integrates the resulting information into a unified graph representation for answer generation. Experiments show that Free360 consistently improves its base MLLM and provides a strong training-free solution for 360° VQA tasks. The source code and dataset will be publicly released upon acceptance.

Tags

360° Image · Multimodal Large Language Models · Visual Question Answering

arXiv Categories

cs.CV cs.AI