Dynamics-Predictive Sampling for Active RL Finetuning of Large Reasoning Models
AI Summary
Proposes Dynamics-Predictive Sampling (DPS), a method that accelerates RL finetuning of LLMs by predicting each prompt's learning dynamics and selecting the most informative prompts for training.
Key Contributions
- Proposes the Dynamics-Predictive Sampling (DPS) method
- Models each prompt's solving process as a dynamical system
- Uses online Bayesian inference to estimate state distributions, enabling efficient prompt selection
Methodology
The solving process of each prompt is modeled as a dynamical system. Using historical rollout reward signals, online Bayesian inference predicts each prompt's learning dynamics, and the most informative prompts are then selected for training without first running costly rollouts.
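To make this concrete, here is a minimal sketch of what such online Bayesian filtering could look like. The discretized solve-rate state space, the sticky transition matrix, the binomial reward emission, and the variance-based informativeness score are all illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import binom

# Discretized hidden states: state k means "the prompt is currently solved
# with probability states[k]". The grid size is an illustrative choice.
K = 11
states = np.linspace(0.0, 1.0, K)

# Sticky HMM transition: a prompt's solve rate drifts slowly between
# RL finetuning steps (the stickiness value is an assumption).
STICKINESS = 0.9
T = np.full((K, K), (1.0 - STICKINESS) / (K - 1))
np.fill_diagonal(T, STICKINESS)

def predict(belief: np.ndarray) -> np.ndarray:
    """One-step predictive prior over the prompt's next solve-rate state."""
    return belief @ T

def update(prior: np.ndarray, successes: int, n_rollouts: int) -> np.ndarray:
    """Filtering step: condition on observed rollout rewards, modeled here
    as binomial given the hidden solve rate."""
    likelihood = binom.pmf(successes, n_rollouts, states)
    posterior = prior * likelihood
    return posterior / posterior.sum()

def informativeness(belief: np.ndarray) -> float:
    """Expected reward variance p * (1 - p) under the predictive prior:
    peaks for moderately challenging prompts (solve rate near 0.5),
    matching the intuition that partially solved examples are most useful."""
    prior = predict(belief)
    return float(prior @ (states * (1.0 - states)))
```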
Original Abstract
Reinforcement learning (RL) finetuning has become a key technique for enhancing the reasoning abilities of large language models (LLMs). However, its effectiveness critically depends on the selection of training data. Recent advances underscore the importance of online prompt selection methods, which typically concentrate training on partially solved or moderately challenging examples under the current policy, thereby yielding more effective model updates. While significantly accelerating RL finetuning in terms of training steps, they also incur substantial computational overhead by requiring extensive LLM rollouts over large candidate batches to identify informative samples, an expense that can outweigh the finetuning process itself. To address this challenge, this work proposes Dynamics-Predictive Sampling (DPS), which online predicts and selects informative prompts by inferring their learning dynamics prior to costly rollouts. Specifically, we introduce a new perspective by modeling each prompt's solving progress during RL finetuning as a dynamical system, where the extent of solving is represented as the state and the transition is characterized by a hidden Markov model. Using historical rollout reward signals, we perform online Bayesian inference to estimate evolving state distributions, and the inference outcome provides a predictive prior for efficient prompt selection without rollout-intensive filtering. Empirical results across diverse reasoning tasks, including mathematics, planning, and visual geometry, demonstrate that DPS substantially reduces redundant rollouts, accelerates the training process, and achieves superior reasoning performance.
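Continuing the sketch above, the "predictive prior for efficient prompt selection without rollout-intensive filtering" described in the abstract could drive a selection loop like the following; the pool size, rollout budget, and observed rewards are placeholders rather than the paper's settings.

```python
# Sketch of a selection loop over a candidate pool (all numbers are
# placeholders): score every prompt from its belief alone, roll out only
# the top-scoring ones, then update those beliefs with observed rewards.
num_prompts, rollout_budget, n_rollouts = 1024, 64, 8
beliefs = {pid: np.full(K, 1.0 / K) for pid in range(num_prompts)}  # uniform init

for step in range(3):  # a few illustrative training steps
    scores = {pid: informativeness(b) for pid, b in beliefs.items()}
    selected = sorted(scores, key=scores.get, reverse=True)[:rollout_budget]
    for pid in selected:
        # Placeholder for real rollout rewards from the current policy.
        successes = int(np.random.binomial(n_rollouts, 0.5))
        beliefs[pid] = update(predict(beliefs[pid]), successes, n_rollouts)
```

Only the selected prompts incur rollouts; every other prompt's belief is carried forward by the transition model alone, which is what lets the filtering step avoid the large candidate-batch rollouts the abstract describes.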