$\texttt{YC-Bench}$: Benchmarking AI Agents for Long-Term Planning and Consistent Execution
AI Summary
YC-Bench is a benchmark that evaluates the long-horizon planning and execution consistency of AI agents by having them run a simulated startup.
Main Contributions
- Proposes the YC-Bench benchmark for evaluating agents' long-horizon planning capabilities.
- Evaluates 12 models in a simulated startup environment, exposing shortcomings of current models.
- Analyzes failure modes, finding that failure to detect adversarial clients is the leading cause of bankruptcy, and highlights the importance of the scratchpad.
Methodology
The benchmark places the agent in a simulated startup environment where it must manage employees, select task contracts, and stay profitable. Each model is evaluated across multiple random seeds to assess performance.
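The turn-based simulation described above can be sketched as a simple loop. This is a minimal illustration only: the class names, payroll, contract payouts, adversarial-client rate, and turn count below are all assumptions for exposition, not the benchmark's actual parameters.

```python
from dataclasses import dataclass
import random

@dataclass
class StartupState:
    funds: float = 200_000.0   # starting capital, per the abstract
    employees: int = 2         # assumed initial headcount
    scratchpad: str = ""       # the one store that survives context truncation

def step(state: StartupState, rng: random.Random) -> StartupState:
    """One simulated turn: pay payroll, resolve a contract, maybe hit an adversarial client."""
    payroll = state.employees * 8_000      # assumed per-turn salary
    revenue = rng.uniform(5_000, 30_000)   # assumed contract payout range
    if rng.random() < 0.1:                 # assumed adversarial-client rate
        revenue = 0.0                      # an undetected adversarial client never pays
    state.funds += revenue - payroll
    return state

def run_seed(seed: int, turns: int = 300) -> float:
    """Run one seeded episode over a horizon of hundreds of turns."""
    rng = random.Random(seed)
    state = StartupState()
    for _ in range(turns):
        state = step(state, rng)
        if state.funds <= 0:               # bankruptcy ends the run early
            break
    return state.funds

# Evaluate across multiple seeds, as the benchmark does (3 per model).
results = [run_seed(s) for s in range(3)]
```

In the real benchmark the agent's policy (hiring, contract selection, scratchpad writes) replaces the random draws here; the sketch only shows the seeded, partially observable turn structure.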
Original Abstract
As LLM agents tackle increasingly complex tasks, a critical question is whether they can maintain strategic coherence over long horizons: planning under uncertainty, learning from delayed feedback, and adapting when early mistakes compound. We introduce $\texttt{YC-Bench}$, a benchmark that evaluates these capabilities by tasking an agent with running a simulated startup over a one-year horizon spanning hundreds of turns. The agent must manage employees, select task contracts, and maintain profitability in a partially observable environment where adversarial clients and growing payroll create compounding consequences for poor decisions. We evaluate 12 models, both proprietary and open source, across 3 seeds each. Only three models consistently surpass the starting capital of \$200K, with Claude Opus 4.6 achieving the highest average final funds at \$1.27M, followed by GLM-5 at \$1.21M at 11$\times$ lower inference cost. Scratchpad usage, the sole mechanism for persisting information across context truncation, is the strongest predictor of success, and adversarial client detection is the primary failure mode, accounting for $47\%$ of bankruptcies. Our analysis reveals that frontier models still exhibit distinct failure modes such as over-parallelization, demonstrating the capability gaps in long-horizon performance. $\texttt{YC-Bench}$ is open-source, reproducible, and configurable.