General Agent Evaluation
AI Summary
The paper proposes Exgentic, a framework for evaluating general-purpose agents, and builds the first open general agent leaderboard, advancing research on general-purpose agents.
Main Contributions
- Proposes conceptual principles for evaluating general-purpose agents
- Designs a Unified Protocol for agent-benchmark integration
- Develops Exgentic, a practical framework for general agent evaluation
- Builds the first Open General Agent Leaderboard
Methodology
Design a framework for general agent evaluation, construct a unified agent-benchmark integration protocol, benchmark five prominent agent implementations across six environments, and analyze the results.
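The paper's Unified Protocol is not reproduced here, but its core idea is that any agent and any benchmark interact through one shared interface rather than per-benchmark glue code. The following is a minimal, hypothetical sketch of what such an interface could look like; the names (Task, Environment, GeneralAgent, run_episode) are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of an agent-benchmark integration protocol.
# All names are illustrative and do not come from the Exgentic paper.
from __future__ import annotations

from dataclasses import dataclass
from typing import Protocol


@dataclass
class Task:
    """A benchmark task expressed in a domain-agnostic form."""
    instruction: str   # natural-language task description
    workspace: str     # e.g. a directory or URL the agent may act on


class Environment(Protocol):
    """What a benchmark exposes so that any agent can run on it."""
    def tasks(self) -> list[Task]: ...
    def reset(self, task: Task) -> str: ...               # initial observation
    def step(self, action: str) -> tuple[str, bool]: ...  # observation, done
    def score(self, task: Task) -> float: ...             # outcome-based grading


class GeneralAgent(Protocol):
    """What an agent exposes; no environment-specific hooks."""
    def act(self, observation: str) -> str: ...


def run_episode(agent: GeneralAgent, env: Environment, task: Task,
                max_steps: int = 50) -> float:
    """Drive one agent-task interaction loop and return the benchmark score."""
    observation = env.reset(task)
    for _ in range(max_steps):
        action = agent.act(observation)
        observation, done = env.step(action)
        if done:
            break
    return env.score(task)
```

Under a contract of this kind, adding a new benchmark or a new agent only requires implementing its own side of the interface, which is what allows the same agent to be scored across all six environments without environment-specific tuning.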
Original Abstract
The promise of general-purpose agents - systems that perform tasks in unfamiliar environments without domain-specific engineering - remains largely unrealized. Existing agents are predominantly specialized, and while emerging implementations like OpenAI SDK Agent and Claude Code hint at broader capabilities, no systematic evaluation of their general performance has been pursued. Current agentic benchmarks assume domain-specific integration, encoding task information in ways that preclude fair evaluation of general agents. This paper frames general-agent evaluation as a first-class research objective. We propose conceptual principles for such evaluation, a Unified Protocol enabling agent-benchmark integration, and Exgentic - a practical framework for general agent evaluation. We benchmark five prominent agent implementations across six environments as the first Open General Agent Leaderboard. Our experiments show that general agents generalize across diverse environments, achieving performance comparable to domain-specific agents without any environment-specific tuning. We release our evaluation protocol, framework, and leaderboard to establish a foundation for systematic research on general-purpose agents.