Agent psychometrics: Task-level performance prediction in agentic coding benchmarks
AI Summary
Proposes a framework for predicting agent performance on coding tasks, combining Item Response Theory (IRT) with task features and decomposing agent ability.
Key Contributions
- A framework for predicting agent performance based on IRT and task features
- A decomposition of agent ability into LLM and scaffold ability components
- Task-level performance prediction across heterogeneous leaderboards
Methodology
Augments Item Response Theory with features extracted from tasks (issue statements, repository contexts, solutions, and test cases) and decomposes agent ability to predict task-level success.
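The ability decomposition can be sketched as a simple logistic, IRT-style model. Note this is a minimal illustration, not the paper's exact parameterization: the additive split of ability into LLM and scaffold components and the linear adjustment of task difficulty by extracted features are assumptions made for clarity.

```python
import math

def predict_success(theta_llm: float, theta_scaffold: float,
                    difficulty: float,
                    feature_weights: list[float],
                    task_features: list[float]) -> float:
    """Probability an agent solves a task under a 1PL-style IRT model.

    Agent ability is assumed to decompose additively into an LLM
    component and a scaffold component; task difficulty is shifted by
    a linear function of features extracted from the task (e.g. issue
    statement length, repository size). All parameter names here are
    hypothetical.
    """
    ability = theta_llm + theta_scaffold
    adjusted_difficulty = difficulty + sum(
        w * f for w, f in zip(feature_weights, task_features)
    )
    # Standard logistic link: P(success) = sigmoid(ability - difficulty)
    return 1.0 / (1.0 + math.exp(-(ability - adjusted_difficulty)))

# Example: a capable LLM in a weak scaffold on a moderately hard task
p = predict_success(theta_llm=1.2, theta_scaffold=-0.3, difficulty=0.5,
                    feature_weights=[0.4, -0.2], task_features=[1.0, 0.5])
```

Because the same task parameters are shared across all agents, evaluation data from heterogeneous leaderboards can in principle be pooled to fit the LLM, scaffold, and task parameters jointly, which is what enables prediction for unseen LLM-scaffold combinations.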
Original Abstract
As the focus in LLM-based coding shifts from static single-step code generation to multi-step agentic interaction with tools and environments, understanding which tasks will challenge agents and why becomes increasingly difficult. This is compounded by current practice: agent performance is typically measured by aggregate pass rates on benchmarks, but single-number metrics obscure the diversity of tasks within a benchmark. We present a framework for predicting success or failure on individual tasks tailored to the agentic coding regime. Our approach augments Item Response Theory (IRT) with rich features extracted from tasks, including issue statements, repository contexts, solutions, and test cases, and introduces a novel decomposition of agent ability into LLM and scaffold ability components. This parameterization enables us to aggregate evaluation data across heterogeneous leaderboards and accurately predict task-level performance for unseen benchmarks, as well as unseen LLM-scaffold combinations. Our methods have practical utility for benchmark designers, who can better calibrate the difficulty of their new tasks without running computationally expensive agent evaluations.