Gaia2: Benchmarking LLM Agents on Dynamic and Asynchronous Environments
AI Summary
Gaia2 is a benchmark for evaluating LLM agents in dynamic, asynchronous environments.
Key Contributions
- Proposes Gaia2, a benchmark for evaluating LLM agents in dynamic, asynchronous environments.
- Gaia2 covers realistic scenarios involving temporal constraints, noise, dynamic events, and multi-agent collaboration.
- Provides write-action verifiers, enabling fine-grained action-level evaluation and reinforcement learning from verifiable rewards.
- Evaluates multiple state-of-the-art proprietary and open-source models, revealing trade-offs across their capabilities.
- Releases Gaia2 and the ARE framework to support developing, benchmarking, and training the next generation of practical agent systems.
Methodology
Constructs test scenarios set in dynamic, asynchronous environments, and uses write-action verifiers to evaluate LLM agents' performance across these scenarios.
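To make the write-action-verifier idea concrete, here is a minimal sketch of action-level verification. It assumes each scenario specifies the write actions (tool calls with arguments) the agent must emit; the names `WriteAction`, `action`, and `verify_actions` are illustrative and are not the actual ARE API.

```python
# Hypothetical sketch of a write-action verifier: a scenario lists expected
# write actions, and each action the agent emits is checked individually,
# yielding a per-action breakdown plus an overall scenario pass/fail.
from dataclasses import dataclass


@dataclass(frozen=True)
class WriteAction:
    tool: str
    args: tuple  # sorted (key, value) pairs, so actions are hashable/comparable


def action(tool, **kwargs):
    """Build a canonical, comparable write action."""
    return WriteAction(tool, tuple(sorted(kwargs.items())))


def verify_actions(agent_actions, expected_actions):
    """Return per-action pass flags and an overall scenario verdict.

    The scenario passes only if every expected write action was emitted.
    (Order-insensitive here; a real verifier could also check timing to
    handle Gaia2's temporal constraints.)"""
    emitted = set(agent_actions)
    flags = {exp: exp in emitted for exp in expected_actions}
    return flags, all(flags.values())


# Example: the agent was supposed to send a message AND create an event,
# but its trace only contains the message, so the scenario fails.
expected = [
    action("send_message", to="alice", body="meeting moved to 3pm"),
    action("create_event", title="sync", time="15:00"),
]
agent_trace = [
    action("send_message", to="alice", body="meeting moved to 3pm"),
]
flags, passed = verify_actions(agent_trace, expected)
```

Because each expected action is checked on its own, the verdict doubles as a fine-grained reward signal, which is what makes this style of verifier usable for reinforcement learning from verifiable rewards.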
Original Abstract
We introduce Gaia2, a benchmark for evaluating large language model agents in realistic, asynchronous environments. Unlike prior static or synchronous evaluations, Gaia2 introduces scenarios where environments evolve independently of agent actions, requiring agents to operate under temporal constraints, adapt to noisy and dynamic events, resolve ambiguity, and collaborate with other agents. Each scenario is paired with a write-action verifier, enabling fine-grained, action-level evaluation and making Gaia2 directly usable for reinforcement learning from verifiable rewards. Our evaluation of state-of-the-art proprietary and open-source models shows that no model dominates across capabilities: GPT-5 (high) reaches the strongest overall score of 42% pass@1 but fails on time-sensitive tasks, Claude-4 Sonnet trades accuracy and speed for cost, and Kimi-K2 leads among open-source models with 21% pass@1. These results highlight fundamental trade-offs between reasoning, efficiency, and robustness, and expose challenges in closing the "sim2real" gap. Gaia2 is built on a consumer environment with the open-source Agents Research Environments platform and designed to be easy to extend. By releasing Gaia2 alongside the foundational ARE framework, we aim to provide the community with a flexible infrastructure for developing, benchmarking, and training the next generation of practical agent systems.