LLM Reasoning relevance: 8/10

Towards Effective Experiential Learning: Dual Guidance for Utilization and Internalization

Fei Bai, Zhipeng Chen, Chuan Hao, Ming Yang, Ran Tao, Bryan Dai, Wayne Xin Zhao, Jian Yang, Hongteng Xu
arXiv: 2603.24093v1 Published: 2026-03-25 Updated: 2026-03-25

AI Summary

Proposes DGO, a framework that uses dual guidance from an external experience bank and the model's internal knowledge to improve how LLMs utilize and internalize experience during RLVR training.

Key Contributions

  • Proposes DGO, a framework that combines external and internal experience to improve RLVR training effectiveness
  • Builds an experience bank of previously explored trajectories to guide exploration
  • Forms a closed loop of experience utilization and internalization

Methodology

An experience bank is built to store previously explored trajectories. The policy then explores under the joint guidance of the experience bank and the model's internal knowledge, and the resulting trajectories are used both to refine the experience bank and to optimize the model parameters.
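The paper releases no code, but the closed loop described above can be sketched in Python. Everything here is hypothetical: `ExperienceBank`, `dgo_step`, and the `policy_sample`/`verify` callables are illustrative names, not the authors' API, and the actual RLVR parameter update is omitted (only the experience-utilization and bank-refinement steps are shown).

```python
from collections import defaultdict

class ExperienceBank:
    """Toy experience bank: keeps the highest-reward trajectories per prompt."""

    def __init__(self, capacity_per_prompt=4):
        self.capacity = capacity_per_prompt
        self.store = defaultdict(list)  # prompt -> [(reward, trajectory), ...]

    def add(self, prompt, trajectory, reward):
        entries = self.store[prompt]
        entries.append((reward, trajectory))
        entries.sort(key=lambda e: e[0], reverse=True)
        del entries[self.capacity:]  # evict all but the best trajectories

    def retrieve(self, prompt, k=2):
        """Return up to k high-reward trajectories as external guidance."""
        return [traj for _, traj in self.store[prompt][:k]]

def dgo_step(policy_sample, verify, bank, prompt, n_rollouts=4):
    """One DGO-style iteration: explore under the joint guidance of the
    experience bank and the policy's own (internal) knowledge, score the
    rollouts with a verifiable reward, and refine the bank. The model
    parameter update that would follow is deliberately left out."""
    guidance = bank.retrieve(prompt)                       # external experience
    rollouts = [policy_sample(prompt, guidance) for _ in range(n_rollouts)]
    rewards = [verify(prompt, t) for t in rollouts]        # verifiable reward
    for traj, r in zip(rollouts, rewards):
        if r > 0:
            bank.add(prompt, traj, r)                      # internalize success
    return rollouts, rewards
```

In this reading, "utilization" is the `retrieve` call that conditions exploration on past successes, and "internalization" is the `add` call (plus the omitted policy update) that folds new successful trajectories back into stable knowledge.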

Original Abstract

Recently, reinforcement learning (RL) has become an important approach for improving the capabilities of large language models (LLMs). In particular, reinforcement learning from verifiable rewards (RLVR) has emerged as a promising paradigm for reasoning tasks. However, existing RL-based training still remains only a rough approximation to human learning. Human learners leverage both external and internal experience to guide exploration and gradually internalize useful trajectories into stable knowledge. Motivated by this gap, we ask: how can LLMs better utilize and internalize experience during RLVR training? To answer this question, we propose Dual Guidance Optimization (DGO), a unified framework that leverages external and internal experience to improve training effectiveness. Specifically, DGO first constructs an experience bank from previously explored trajectories. The policy then performs exploration under the joint guidance of the experience bank and the model's internal knowledge. The resulting trajectories are further used to refine the experience bank and optimize model parameters, forming a closed loop of experience utilization and internalization. Experiments show that DGO consistently outperforms baseline methods, suggesting that better utilization and internalization of experience lead to more effective reasoning.

Tags

Reinforcement Learning · Large Language Models · Verifiable Rewards · Experiential Learning

arXiv Categories

cs.LG cs.AI