Capture the Flags: Family-Based Evaluation of Agentic LLMs via Semantics-Preserving Transformations
AI Summary
Proposes Evolve-CTF, a tool that generates families of CTF challenges via semantics-preserving transformations, enabling controlled evaluation of the robustness of agentic LLMs.
Key Contributions
- Introduces the concept of CTF challenge families
- Develops the Evolve-CTF tool
- Evaluates 13 agentic LLM configurations on CTF challenge families
Methodology
Evolve-CTF applies semantics-preserving transformations to a single CTF challenge to generate a family of semantically equivalent variants; these families are then used to evaluate agentic LLMs while the underlying exploit strategy stays fixed.
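To make the idea concrete, a renaming-based semantics-preserving transformation of the kind the paper describes can be sketched with Python's `ast` module. This is a minimal illustrative sketch, not Evolve-CTF's actual implementation; the `RenameTransformer` class, the example `source` snippet, and the name mapping are all hypothetical.

```python
import ast

class RenameTransformer(ast.NodeTransformer):
    """Consistently rename identifiers per a fixed mapping.

    Semantics are preserved as long as the mapping is applied
    uniformly and introduces no name collisions. (Illustrative
    sketch only; not the Evolve-CTF implementation.)
    """

    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        # Rewrite both reads and writes of mapped variables.
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node

    def visit_arg(self, node):
        # Function parameters are `arg` nodes, not `Name` nodes.
        if node.arg in self.mapping:
            node.arg = self.mapping[node.arg]
        return node

    def visit_FunctionDef(self, node):
        if node.name in self.mapping:
            node.name = self.mapping[node.name]
        self.generic_visit(node)
        return node

# Hypothetical toy challenge: a flag-checking function.
source = "def check(flag):\n    return flag == 'CTF{x}'\n"

tree = ast.parse(source)
new_tree = RenameTransformer({"check": "f0", "flag": "v0"}).visit(tree)
transformed = ast.unparse(ast.fix_missing_locations(new_tree))
print(transformed)
```

Each variant produced this way requires the same exploit strategy as the original, which is what lets a family isolate robustness to surface-level code changes from reasoning about the vulnerability itself.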
Original Abstract
Agentic large language models (LLMs) are increasingly evaluated on cybersecurity tasks using capture-the-flag (CTF) benchmarks. However, existing pointwise benchmarks have limited ability to shed light on the robustness and generalisation abilities of agents across alternative versions of the source code. We introduce CTF challenge families, whereby a single CTF is used as the basis for generating a family of semantically-equivalent challenges via semantics-preserving program transformations. This enables controlled evaluation of agent robustness to source code transformations while keeping the underlying exploit strategy fixed. We introduce a new tool, Evolve-CTF, that generates CTF families from Python challenges using a range of transformations. Using Evolve-CTF to derive families from Cybench and Intercode challenges, we evaluate 13 agentic LLM configurations with tool access. We find that models are remarkably robust to intrusive renaming and code insertion-based transformations, but that composed transformations and deeper obfuscation affect performance by requiring more sophisticated use of tools. We also find that enabling explicit reasoning has little effect on solution success rates across challenge families. Our work contributes a valuable technique and tool for future LLM evaluations, and a large dataset characterising the capabilities of current state-of-the-art models in this domain.