Multimodal Learning Relevance: 9/10

Evaluating Time Awareness and Cross-modal Active Perception of Large Models via 4D Escape Room Task

Yurui Dong, Ziyue Wang, Shuyun Lu, Dairu Liu, Xuechen Liu, Fuwen Luo, Peng Li, Yang Liu
arXiv: 2603.15467v1 Published: 2026-03-16 Updated: 2026-03-16

AI Summary

Proposes EscapeCraft-4D, an environment for evaluating the temporal-awareness and cross-modal active-perception abilities of large models.

Key Contributions

  • Introduces the EscapeCraft-4D environment
  • Identifies shortcomings of existing models in temporal perception and cross-modal fusion
  • Analyzes how multimodal interaction influences model decisions

Methodology

Constructs a 4D escape-room environment containing time-varying cues and trigger-based audio, and evaluates models' cross-modal reasoning abilities under time constraints.
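To make the setup concrete, the sketch below models the two ingredients the methodology names: evidence that is only observable during a time window, and audio that plays only when the agent visits a triggering location. All class names, fields, and example values here are hypothetical illustrations, not the paper's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class TransientCue:
    """Evidence observable only during [start, end) — hypothetical."""
    content: str
    start: float
    end: float

    def observable_at(self, t: float) -> bool:
        return self.start <= t < self.end


@dataclass
class AudioTrigger:
    """Audio clip that plays only when the agent is at `location` — hypothetical."""
    location: str
    clip: str


class EscapeRoom4D:
    """Minimal sketch of a 4D room: spatial locations plus an irreversible clock."""

    def __init__(self, cues, triggers):
        self.cues = cues
        self.triggers = {tr.location: tr for tr in triggers}
        self.t = 0.0

    def step(self, location: str, dt: float = 1.0):
        """Advance time by dt and return what the agent perceives at `location`."""
        self.t += dt
        visual = [c.content for c in self.cues if c.observable_at(self.t)]
        audio = self.triggers[location].clip if location in self.triggers else None
        return {"t": self.t, "visual": visual, "audio": audio}


room = EscapeRoom4D(
    cues=[TransientCue("code on wall: 7 3 1", start=0.0, end=3.0)],
    triggers=[AudioTrigger("door", "three short beeps")],
)
print(room.step("hall"))  # t=1.0: code visible, no audio away from the door
print(room.step("door"))  # t=2.0: code visible, trigger fires at the door
print(room.step("door"))  # t=3.0: transient code has expired; audio only
```

Because the clock only moves forward, an agent that lingers in the hall misses the wall code permanently, which is the kind of time-constrained, cross-modal trade-off the benchmark is built to probe.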

Original Abstract

Multimodal Large Language Models (MLLMs) have recently made rapid progress toward unified Omni models that integrate vision, language, and audio. However, existing environments largely focus on 2D or 3D visual context and vision-language tasks, offering limited support for temporally dependent auditory signals and selective cross-modal integration, where different modalities may provide complementary or interfering information, which are essential capabilities for realistic multimodal reasoning. As a result, whether models can actively coordinate modalities and reason under time-varying, irreversible conditions remains underexplored. To this end, we introduce EscapeCraft-4D, a customizable 4D environment for assessing selective cross-modal perception and time awareness in Omni models. It incorporates trigger-based auditory sources, temporally transient evidence, and location-dependent cues, requiring agents to perform spatio-temporal reasoning and proactive multimodal integration under time constraints. Building on this environment, we curate a benchmark to evaluate corresponding abilities across powerful models. Evaluation results suggest that models struggle with modality bias, and reveal significant gaps in current models' ability to integrate multiple modalities under time constraints. Further in-depth analysis uncovers how multiple modalities interact and jointly influence model decisions in complex multimodal reasoning environments.

Tags

Multimodal Learning · Large Language Models · Temporal Reasoning · Active Perception

arXiv Category

cs.CV