AI Agents relevance: 9/10

OmniGAIA: Towards Native Omni-Modal AI Agents

Xiaoxi Li, Wenxiang Jiao, Jiarui Jin, Shijian Wang, Guanting Dong, Jiajie Jin, Hao Wang, Yinuo Wang, Ji-Rong Wen, Yuan Lu, Zhicheng Dou
arXiv: 2602.22897v1 Published: 2026-02-26 Updated: 2026-02-26

AI Summary

The paper introduces the OmniGAIA benchmark and the OmniAtlas model, aiming to improve AI agents' reasoning and tool-use capabilities in omni-modal environments.

Key Contributions

  • Introduces OmniGAIA, a benchmark for evaluating omni-modal agents.
  • Introduces OmniAtlas, a native omni-modal foundation agent.
  • Trains OmniAtlas with a hindsight-guided tree exploration strategy and OmniDPO.

Methodology

The OmniGAIA benchmark is constructed by using an omni-modal event graph to generate complex multi-hop queries; OmniAtlas is then trained and optimized with a hindsight-guided tree exploration strategy and OmniDPO.
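The paper's exact event-graph construction is not detailed in this summary, but the general idea of synthesizing multi-hop queries from a graph of cross-modal events can be sketched as below. All names (`Event`, `multi_hop_query`, the relations and descriptions) are illustrative assumptions, not the authors' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A node in the event graph: something observed in one modality."""
    eid: str
    modality: str          # "video" | "audio" | "image"
    description: str
    links: dict = field(default_factory=dict)  # relation -> target eid

def multi_hop_query(graph: dict, start: str, hops: int):
    """Walk `hops` relations from `start`, collecting the chain of events.

    The final event's description serves as the answer; the traversed
    relations compose the multi-hop question."""
    chain, cur = [], graph[start]
    for _ in range(hops):
        if not cur.links:
            break
        relation, nxt = next(iter(cur.links.items()))
        chain.append((cur, relation))
        cur = graph[nxt]
    question = f"Starting from {chain[0][0].description}, "
    question += " then ".join(f"follow '{rel}'" for _, rel in chain)
    return question + " -- what do you find?", cur.description

# A toy three-event graph spanning audio, video, and image modalities.
graph = {
    "e1": Event("e1", "audio", "a siren heard at 0:12",
                {"co-occurs with": "e2"}),
    "e2": Event("e2", "video", "a red truck passing",
                {"driver shown in": "e3"}),
    "e3": Event("e3", "image", "a firefighter in uniform"),
}
q, a = multi_hop_query(graph, "e1", 2)
```

Answering such a query forces an agent to ground evidence in one modality, then follow relations into others, which matches the cross-modal reasoning requirement the benchmark targets.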

Original Abstract

Human intelligence naturally intertwines omni-modal perception -- spanning vision, audio, and language -- with complex reasoning and tool usage to interact with the world. However, current multi-modal LLMs are primarily confined to bi-modal interactions (e.g., vision-language), lacking the unified cognitive capabilities required for general AI assistants. To bridge this gap, we introduce OmniGAIA, a comprehensive benchmark designed to evaluate omni-modal agents on tasks necessitating deep reasoning and multi-turn tool execution across video, audio, and image modalities. Constructed via a novel omni-modal event graph approach, OmniGAIA synthesizes complex, multi-hop queries derived from real-world data that require cross-modal reasoning and external tool integration. Furthermore, we propose OmniAtlas, a native omni-modal foundation agent under tool-integrated reasoning paradigm with active omni-modal perception. Trained on trajectories synthesized via a hindsight-guided tree exploration strategy and OmniDPO for fine-grained error correction, OmniAtlas effectively enhances the tool-use capabilities of existing open-source models. This work marks a step towards next-generation native omni-modal AI assistants for real-world scenarios.
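The abstract credits OmniDPO with fine-grained error correction, but its formulation is not given here. It presumably builds on the standard DPO preference objective over (chosen, rejected) trajectory pairs, sketched below; the pairing granularity and `beta` value are assumptions:

```python
import math

def dpo_loss(logp_w: float, logp_l: float,
             ref_logp_w: float, ref_logp_l: float,
             beta: float = 0.1) -> float:
    """Standard DPO objective for one preference pair.

    logp_w / logp_l are the summed log-probabilities of the winning and
    losing trajectories under the policy being trained; ref_logp_* are
    the same quantities under a frozen reference model. The loss is the
    negative log-sigmoid of the beta-scaled implicit-reward margin."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

With a zero margin the loss is log 2; it decreases as the policy assigns relatively more probability to the preferred trajectory than the reference does, which is the mechanism a fine-grained variant would apply at the level of individual erroneous steps.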

Tags

Multimodal learning, AI agents, Tool use, Reasoning, Benchmark

arXiv Categories

cs.AI cs.CL cs.CV cs.LG cs.MM