AI Agents Relevance: 8/10

MA-EgoQA: Question Answering over Egocentric Videos from Multiple Embodied Agents

Kangsan Kim, Yanlai Yang, Suji Kim, Woongyeong Yeo, Youngwan Lee, Mengye Ren, Sung Ju Hwang
arXiv: 2603.09827v1 Published: 2026-03-10 Updated: 2026-03-10

AI Summary

Introduces MA-EgoQA, a benchmark for evaluating question answering over egocentric videos in multi-agent settings.

Main Contributions

  • Formally defines the problem of question answering over egocentric videos from multiple agents
  • Constructs the MA-EgoQA benchmark dataset
  • Proposes the EgoMAS baseline model

Methodology

Builds a shared memory across agents, applies agent-wise dynamic retrieval over it, and proposes the EgoMAS model as a baseline.
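The paper does not detail the implementation, but the two named components, a shared memory pooled across agents and agent-wise dynamic retrieval, can be sketched as follows. All names (`MemoryEntry`, `SharedMemory`, the toy cosine scoring) are illustrative assumptions, not the authors' actual design; in EgoMAS the embeddings would come from a video encoder rather than hand-written vectors.

```python
from dataclasses import dataclass, field
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

@dataclass
class MemoryEntry:
    agent_id: str      # which embodied agent produced this segment
    timestamp: float   # when the segment was recorded
    caption: str       # compressed text description of the video segment
    embedding: list    # feature vector for retrieval

@dataclass
class SharedMemory:
    """Memory pooled across all agents' egocentric streams."""
    entries: list = field(default_factory=list)

    def write(self, entry: MemoryEntry):
        self.entries.append(entry)

    def retrieve(self, query_emb, k_per_agent=2):
        """Agent-wise dynamic retrieval: score entries against the query
        separately for each agent and keep each agent's top-k, so one
        long or noisy stream cannot crowd out the others."""
        by_agent = {}
        for e in self.entries:
            by_agent.setdefault(e.agent_id, []).append(e)
        results = []
        for items in by_agent.values():
            items.sort(key=lambda e: cosine(e.embedding, query_emb),
                       reverse=True)
            results.extend(items[:k_per_agent])
        # Merge the per-agent picks into one chronological context window.
        results.sort(key=lambda e: e.timestamp)
        return results

# Usage: two agents write to the shared memory, then a query retrieves
# the most relevant segment from each agent's stream.
mem = SharedMemory()
mem.write(MemoryEntry("agent_1", 1.0, "picks up a cup", [1.0, 0.0]))
mem.write(MemoryEntry("agent_1", 2.0, "opens the door", [0.0, 1.0]))
mem.write(MemoryEntry("agent_2", 1.5, "waves at a person", [1.0, 0.1]))
context = mem.retrieve([1.0, 0.0], k_per_agent=1)
```

The per-agent top-k step is the key design choice implied by the abstract: a single global ranking over all entries would let one agent's stream dominate the retrieved context, whereas splitting by `agent_id` first guarantees system-level coverage.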

Original Abstract

As embodied models become more powerful, humans will collaborate with multiple embodied AI agents at their workplace or home in the future. To ensure better communication between human users and the multi-agent system, it is crucial to interpret incoming information from agents in parallel and refer to the appropriate context for each query. Existing challenges include effectively compressing and communicating high volumes of individual sensory inputs in the form of video and correctly aggregating multiple egocentric videos to construct system-level memory. In this work, we first formally define a novel problem of understanding multiple long-horizon egocentric videos simultaneously collected from embodied agents. To facilitate research in this direction, we introduce MultiAgent-EgoQA (MA-EgoQA), a benchmark designed to systematically evaluate existing models in our scenario. MA-EgoQA provides 1.7k questions unique to multiple egocentric streams, spanning five categories: social interaction, task coordination, theory-of-mind, temporal reasoning, and environmental interaction. We further propose a simple baseline model for MA-EgoQA named EgoMAS, which leverages shared memory across embodied agents and agent-wise dynamic retrieval. Through comprehensive evaluation of diverse baselines and EgoMAS on MA-EgoQA, we find that current approaches are unable to effectively handle multiple egocentric streams, highlighting the need for future advances in system-level understanding across agents. The code and benchmark are available at https://ma-egoqa.github.io.

Tags

Multi-agent Egocentric video Question answering Embodied AI

arXiv Categories

cs.CV cs.AI