SAGE: Multi-Agent Self-Evolution for LLM Reasoning
AI Summary
SAGE proposes a multi-agent self-evolution framework that improves LLM reasoning on mathematics and code generation.
Key Contributions
- Proposes the SAGE framework, which uses self-evolving agents to improve LLM reasoning
- Designs four co-evolving agents: Challenger, Planner, Solver, and Critic
- Validates SAGE's effectiveness on mathematics and code-generation benchmarks
Methodology
SAGE's four agents, Challenger, Planner, Solver, and Critic, form a closed loop: through reinforcement learning and external verifiers, they jointly improve the LLM's reasoning ability.
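The closed loop above can be sketched as follows. This is a toy illustration of the data flow only: in the real SAGE framework each role is the same LLM backbone with a different policy, updated by reinforcement learning, whereas here each agent is a placeholder function (all names and the arithmetic task are illustrative assumptions, not from the paper).

```python
# Toy sketch of SAGE's Challenger -> Planner -> Solver -> Critic loop.
# Each "agent" is a stub function standing in for an LLM policy.

def challenger(seed_tasks, round_idx):
    """Generate a progressively harder task (toy: raise the exponent)."""
    base = seed_tasks[round_idx % len(seed_tasks)]
    return {"base": base, "exp": round_idx + 2}

def planner(task):
    """Convert the task into an explicit multi-step plan."""
    return [f"multiply by {task['base']}"] * task["exp"]

def solver(task, plan):
    """Follow the plan step by step to produce an answer."""
    result = 1
    for _ in plan:
        result *= task["base"]
    return result

def verifier(task, answer):
    """External checker: recompute the ground truth independently."""
    return answer == task["base"] ** task["exp"]

def critic(task, plan):
    """Filter degenerate tasks/plans before they enter training."""
    return len(plan) >= 2  # toy quality gate against curriculum drift

def sage_round(seed_tasks, round_idx):
    """One pass of the closed loop; returns a training sample or None."""
    task = challenger(seed_tasks, round_idx)
    plan = planner(task)
    if not critic(task, plan):
        return None  # rejected samples never reach self-training
    answer = solver(task, plan)
    return {"task": task, "plan": plan, "answer": answer,
            "reward": 1.0 if verifier(task, answer) else 0.0}

if __name__ == "__main__":
    seeds = [2, 3]  # small seed set, as in the paper's setup
    samples = [sage_round(seeds, r) for r in range(4)]
    kept = [s for s in samples if s is not None]
    print(len(kept), sum(s["reward"] for s in kept))  # prints: 4 4.0
```

In this sketch the Critic's filter and the external verifier play distinct roles, mirroring the paper: the Critic gates what enters the curriculum, while the verifier supplies the reward signal for the Solver.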
Original Abstract
Reinforcement learning with verifiable rewards improves reasoning in large language models (LLMs), but many methods still rely on large human-labeled datasets. While self-play reduces this dependency, it often lacks explicit planning and strong quality control, limiting stability in long-horizon multi-step reasoning. We present SAGE (Self-evolving Agents for Generalized reasoning Evolution), a closed-loop framework where four agents: Challenger, Planner, Solver, and Critic, co-evolve from a shared LLM backbone using only a small seed set. The Challenger continuously generates increasingly difficult tasks; the Planner converts each task into a structured multi-step plan; and the Solver follows the plan to produce an answer, whose correctness is determined by external verifiers. The Critic scores and filters both generated questions and plans to prevent curriculum drift and maintain training signal quality, enabling stable self-training. Across mathematics and code-generation benchmarks, SAGE delivers consistent gains across model scales, improving the Qwen-2.5-7B model by 8.9% on LiveCodeBench and 10.7% on OlympiadBench.