OdysseyArena: Benchmarking Large Language Models For Long-Horizon, Active and Inductive Interactions
AI Summary
Introduces OdysseyArena, a benchmark for evaluating LLM capabilities in long-horizon, active, and inductive interactions.
Key Contributions
- Proposes the OdysseyArena benchmark
- Designs four primitives that translate abstract transition dynamics into concrete interactive environments
- Evaluates 15+ LLMs on OdysseyArena
Methodology
Constructs OdysseyArena-Lite and OdysseyArena-Challenge, using a suite of tasks to measure LLMs' inductive efficiency and long-horizon discovery capabilities.
Original Abstract
The rapid advancement of Large Language Models (LLMs) has catalyzed the development of autonomous agents capable of navigating complex environments. However, existing evaluations primarily adopt a deductive paradigm, where agents execute tasks based on explicitly provided rules and static goals, often within limited planning horizons. Crucially, this neglects the inductive necessity for agents to discover latent transition laws from experience autonomously, which is the cornerstone for enabling agentic foresight and sustaining strategic coherence. To bridge this gap, we introduce OdysseyArena, which re-centers agent evaluation on long-horizon, active, and inductive interactions. We formalize and instantiate four primitives, translating abstract transition dynamics into concrete interactive environments. Building upon this, we establish OdysseyArena-Lite for standardized benchmarking, providing a set of 120 tasks to measure an agent's inductive efficiency and long-horizon discovery. Pushing further, we introduce OdysseyArena-Challenge to stress-test agent stability across extreme interaction horizons (e.g., > 200 steps). Extensive experiments on 15+ leading LLMs reveal that even frontier models exhibit a deficiency in inductive scenarios, identifying a critical bottleneck in the pursuit of autonomous discovery in complex environments. Our code and data are available at https://github.com/xufangzhi/Odyssey-Arena.