Learning to Self-Evolve
AI Summary
The LSE framework trains LLMs to improve their own contexts at test time via reinforcement learning, boosting downstream performance.
Key Contributions
- Proposes the Learning to Self-Evolve (LSE) framework
- Reduces the multi-step context-evolution problem to a single-step RL objective
- A 4B-parameter model trained with LSE outperforms self-evolving policies powered by GPT-5 and Claude Sonnet 4.5
Methodology
Trains an LLM with reinforcement learning to improve downstream task performance through context edits, using tree search to guide the evolution process.
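The single-step reward described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `evaluate` callable and the `edit_reward` function are hypothetical names, and real LSE training would score contexts with an actual LLM on held-out problems.

```python
# Hypothetical sketch of LSE's single-step objective: a context edit is
# rewarded by the improvement in downstream performance it produces.
# `evaluate(context, problem)` -> 0/1 score is an assumed interface.

def edit_reward(evaluate, context, edited_context, problems):
    """Reward = mean accuracy with the edited context minus mean accuracy
    with the original context, measured on the same problem set."""
    before = sum(evaluate(context, p) for p in problems) / len(problems)
    after = sum(evaluate(edited_context, p) for p in problems) / len(problems)
    return after - before

# Toy usage: a problem is "solved" iff its hint appears in the context.
evaluate = lambda ctx, p: int(p in ctx)
r = edit_reward(evaluate, "hint-a", "hint-a hint-b", ["hint-a", "hint-b"])
# r == 0.5: the edit raised accuracy from 0.5 to 1.0
```

In the full method this reward would score candidate edits inside the tree-guided evolution loop, so each edit is trained on its immediate measured benefit rather than on a long multi-step return.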
Original Abstract
We introduce Learning to Self-Evolve (LSE), a reinforcement learning framework that trains large language models (LLMs) to improve their own contexts at test time. We situate LSE in the setting of test-time self-evolution, where a model iteratively refines its context from feedback on seen problems to perform better on new ones. Existing approaches rely entirely on the inherent reasoning ability of the model and never explicitly train it for this task. LSE reduces the multi-step evolution problem to a single-step RL objective, where each context edit is rewarded by the improvement in downstream performance. We pair this objective with a tree-guided evolution loop. On Text-to-SQL generation (BIRD) and general question answering (MMLU-Redux), a 4B-parameter model trained with LSE outperforms self-evolving policies powered by GPT-5 and Claude Sonnet 4.5, as well as prompt optimization methods including GEPA and TextGrad, and transfers to guide other models without additional training. Our results highlight the effectiveness of treating self-evolution as a learnable skill.