ContextBench: A Benchmark for Context Retrieval in Coding Agents
AI Summary
ContextBench is a benchmark for evaluating how coding agents retrieve code context during issue resolution.
Key Contributions
- Proposes ContextBench, a benchmark of 1,136 issue-resolution tasks.
- Implements an automated evaluation framework that tracks agent trajectories and measures context recall, precision, and efficiency.
- Evaluates four frontier LLMs and five coding agents, revealing bottlenecks in context retrieval.
Methodology
The authors construct a set of issue-resolution tasks augmented with human-annotated gold contexts, and develop an automated evaluation framework that assesses agents' context retrieval ability through metrics such as recall and precision.
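To make the recall/precision idea concrete, here is a minimal sketch of how retrieved context could be scored against a human-annotated gold context. It models contexts as sets of (file, line) pairs; the benchmark's actual granularity and matching rules are assumptions here, not the paper's specification.

```python
# Hypothetical sketch of context recall/precision, assuming contexts
# are compared at (file, line) granularity. Span format and matching
# rules are illustrative, not ContextBench's actual definition.

def expand(spans):
    """Expand (file, start_line, end_line) spans into (file, line) pairs."""
    return {(f, ln) for f, start, end in spans for ln in range(start, end + 1)}

def context_metrics(gold_spans, retrieved_spans):
    """Recall: fraction of gold lines the agent retrieved.
    Precision: fraction of retrieved lines that are gold."""
    gold = expand(gold_spans)
    retrieved = expand(retrieved_spans)
    hit = gold & retrieved
    recall = len(hit) / len(gold) if gold else 0.0
    precision = len(hit) / len(retrieved) if retrieved else 0.0
    return recall, precision

gold = [("src/app.py", 10, 19)]        # 10 gold lines
retrieved = [("src/app.py", 10, 29)]   # 20 retrieved lines
recall, precision = context_metrics(gold, retrieved)
# recall = 1.0 (all gold lines found), precision = 0.5
```

Under this framing, the paper's observation that "LLMs consistently favor recall over precision" corresponds to agents retrieving broad spans (high recall) that include many non-gold lines (low precision).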
Original Abstract
LLM-based coding agents have shown strong performance on automated issue resolution benchmarks, yet existing evaluations largely focus on final task success, providing limited insight into how agents retrieve and use code context during problem solving. We introduce ContextBench, a process-oriented evaluation of context retrieval in coding agents. ContextBench consists of 1,136 issue-resolution tasks from 66 repositories across eight programming languages, each augmented with human-annotated gold contexts. We further implement an automated evaluation framework that tracks agent trajectories and measures context recall, precision, and efficiency throughout issue resolution. Using ContextBench, we evaluate four frontier LLMs and five coding agents. Our results show that sophisticated agent scaffolding yields only marginal gains in context retrieval ("The Bitter Lesson" of coding agents), LLMs consistently favor recall over precision, and substantial gaps exist between explored and utilized context. ContextBench augments existing end-to-end benchmarks with intermediate gold-context metrics that unbox the issue-resolution process. These contexts offer valuable intermediate signals for guiding LLM reasoning in software tasks. Data and code are available at: https://cioutn.github.io/context-bench/.