$V_{0.5}$: Generalist Value Model as a Prior for Sparse RL Rollouts
AI Summary
Proposes the $V_{0.5}$ algorithm, which fuses a generalist value model prior with the empirical mean of sparse rollouts to construct a robust advantage baseline.
Key Contributions
- Proposed the $V_{0.5}$ algorithm, which fuses a value-model prior with the empirical mean of sparse rollouts.
- Introduced a real-time statistical test and a dynamic budget allocation mechanism to balance bias and variance.
- Outperforms GRPO and DAPO on mathematical reasoning benchmarks.
Methodology
$V_{0.5}$ adaptively fuses the value-model prior with the empirical mean of sparse rollouts, dynamically adjusting the rollout budget via real-time statistical testing to minimize the MSE of the baseline.
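The fusion step can be sketched as follows. This is a minimal illustration under assumed details (the function name `fuse_baseline`, the z-test form, and the inverse-variance weighting are illustrative choices, not the paper's exact formulation): a group of rollout rewards is tested against the value-model prior; if the prior passes the test it is combined with the empirical mean via an MSE-motivated convex weighting, otherwise the prior is rejected and extra rollout budget is requested.

```python
import math

# Illustrative sketch of prior/empirical-mean fusion (names and constants
# are assumptions, not taken from the paper).
Z_CRIT = 1.96  # two-sided z-test at significance level alpha = 0.05


def fuse_baseline(prior, rewards, prior_var):
    """Return (baseline, need_more_rollouts) for one prompt's rollout group.

    prior      -- baseline predicted by the generalist value model
    rewards    -- verifiable rewards from the sparse rollouts (e.g. 0/1)
    prior_var  -- assumed variance (uncertainty) of the prior's error
    """
    k = len(rewards)
    mean = sum(rewards) / k
    var = sum((r - mean) ** 2 for r in rewards) / max(k - 1, 1)
    se = math.sqrt(var / k + 1e-8)  # standard error of the empirical mean

    # Hypothesis test: is the prior consistent with the sampled rewards?
    z = (mean - prior) / se
    if abs(z) > Z_CRIT:
        # Prior looks unreliable here: fall back to the empirical mean
        # and signal that additional rollout budget should be allocated.
        return mean, True

    # MSE-minimizing convex combination: weight the prior by how noisy
    # the sparse empirical mean is relative to the prior's uncertainty.
    w = (var / k) / (var / k + prior_var + 1e-8)
    return w * prior + (1 - w) * mean, False
```

With a group size of 4 (the extreme-sparsity setting mentioned in the abstract), e.g. `fuse_baseline(prior=0.6, rewards=[1, 0, 1, 1], prior_var=0.01)` returns a baseline between the prior (0.6) and the empirical mean (0.75), pulled toward the prior because four samples give a noisy mean.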
Original Abstract
In Reinforcement Learning with Verifiable Rewards (RLVR), constructing a robust advantage baseline is critical for policy gradients, effectively guiding the policy model to reinforce desired behaviors. Recent research has introduced Generalist Value Models (such as $V_0$), which achieve pre-trained value estimation by explicitly encoding model capabilities in-context, eliminating the need to synchronously update the value model alongside the policy model. In this paper, we propose $V_{0.5}$, which adaptively fuses the baseline predicted by such a value model (acting as a prior) with the empirical mean derived from sparse rollouts. This constructs a robust baseline that balances computational efficiency with extremely low variance. Specifically, we introduce real-time statistical testing and dynamic budget allocation. This balances the high variance caused by sparse sampling against the systematic bias (or hallucinations) inherent in the value model's prior. By constructing a hypothesis test to evaluate the prior's reliability in real-time, the system dynamically allocates additional rollout budget on demand. This mechanism minimizes the baseline estimator's Mean Squared Error (MSE), guaranteeing stable policy gradients, even under extreme sparsity with a group size of 4. Extensive evaluations across six mathematical reasoning benchmarks demonstrate that $V_{0.5}$ significantly outperforms GRPO and DAPO, achieving faster convergence and a performance improvement of over 10%.
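To see where the baseline enters the policy gradient, the sketch below contrasts GRPO-style advantages (reward minus the group's empirical mean; GRPO additionally normalizes by the group's reward std, omitted here for clarity) with advantages computed against a prior-fused baseline. The setup and the value `0.62` are purely illustrative, not from the paper.

```python
# Illustrative comparison of advantage baselines for one rollout group
# (assumed setup; not the paper's implementation).

def grpo_advantages(rewards):
    """GRPO-style advantage: reward minus the group's empirical mean.
    (GRPO also divides by the group std; omitted in this sketch.)"""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]


def fused_advantages(rewards, fused_baseline):
    """V_{0.5}-style advantage: the group mean is replaced by a baseline
    that also incorporates the value model's prior."""
    return [r - fused_baseline for r in rewards]


# A sparse group of 4 verifiable (0/1) rewards:
group = [1, 0, 1, 1]
print(grpo_advantages(group))          # centered on the noisy group mean 0.75
print(fused_advantages(group, 0.62))   # centered on a hypothetical fused baseline
```

With only four samples the group mean is itself high-variance; anchoring the baseline partly on the prior is what lets $V_{0.5}$ keep advantages, and hence policy gradients, stable at this group size.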