Post-Training Local LLM Agents for Linux Privilege Escalation with Verifiable Rewards
AI Summary
Proposes a two-stage post-training pipeline that lifts the performance of small, locally run LLMs on Linux privilege-escalation tasks to near that of large frontier models.
Key Contributions
- A two-stage post-training pipeline (SFT + RL)
- A verifiable reward mechanism for Linux privilege-escalation tasks
- A high-performing local model, PrivEsc-LLM, produced by this pipeline
Methodology
Supervised fine-tuning on traces from procedurally generated privilege-escalation environments, followed by reinforcement learning with verifiable rewards.
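The reward in such a setup can be verified automatically: success means the agent's shell is running as root, which is checkable with a single probe. The sketch below illustrates this idea under stated assumptions; `session_cmd` is a hypothetical command prefix that executes a probe inside the agent's (possibly containerized) session, and the binary 0/1 reward is an assumption, not the paper's exact reward definition.

```python
import subprocess

def verify_escalation(session_cmd: list[str]) -> float:
    """Hypothetical binary verifiable reward for privilege escalation.

    session_cmd: assumed command prefix that runs its arguments inside
    the agent's session (e.g. a docker-exec wrapper). We probe with
    `id -u`, which prints the effective user id; root is uid 0.
    """
    probe = session_cmd + ["id", "-u"]
    out = subprocess.run(probe, capture_output=True, text=True, timeout=10)
    # Reward 1.0 only when the probe reports uid 0 (root).
    return 1.0 if out.stdout.strip() == "0" else 0.0
```

Because the check is a deterministic property of the environment rather than a judgment by another model, it gives RL a reward signal that cannot be gamed by plausible-sounding but incorrect transcripts.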
Original Abstract
LLM agents are increasingly relevant to research domains such as vulnerability discovery. Yet, the strongest systems remain closed and cloud-only, making them resource-intensive, difficult to reproduce, and unsuitable for work involving proprietary code or sensitive data. Consequently, there is an urgent need for small, local models that can perform security tasks under strict resource budgets, but methods for developing them remain underexplored. In this paper, we address this gap by proposing a two-stage post-training pipeline. We focus on the problem of Linux privilege escalation, where success is automatically verifiable and the task requires multi-step interactive reasoning. Using an experimental setup that prevents data leakage, we post-train a 4B model in two stages: supervised fine-tuning on traces from procedurally generated privilege-escalation environments, followed by reinforcement learning with verifiable rewards. On a held-out benchmark of 12 Linux privilege-escalation scenarios, supervised fine-tuning alone more than doubles the baseline success rate at 20 rounds, and reinforcement learning further lifts our resulting model, PrivEsc-LLM, to 95.8%, nearly matching Claude Opus 4.6 at 97.5%. At the same time, the expected inference cost per successful escalation is reduced by over 100x.