STAPO: Stabilizing Reinforcement Learning for LLMs by Silencing Rare Spurious Tokens
AI Summary
STAPO stabilizes the reinforcement learning process and improves LLM reasoning by masking gradient updates for spurious tokens.
Key Contributions
- Identifies and defines the spurious tokens that cause training instability
- Proposes the STAPO algorithm, which stabilizes training by masking gradient updates for spurious tokens
- Shows experimentally that STAPO outperforms existing methods on mathematical reasoning tasks
Methodology
Analyzes the relationship between token probability and gradient magnitude, finds that spurious tokens drive training collapse, and proposes STAPO, which masks spurious-token gradients and renormalizes the loss over the remaining valid tokens.
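The mask-and-renormalize step can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the threshold rule, the `spurious_frac` parameter, and the simple `-advantage * log p` surrogate are assumptions; the paper only states that roughly 0.01% of tokens are flagged and that the loss is renormalized over valid tokens.

```python
import math

def stapo_loss(token_probs, advantages, spurious_frac=1e-4):
    """Illustrative sketch of STAPO's core step: the lowest-probability
    tokens inside positively-rewarded responses are treated as spurious,
    their policy-gradient terms are masked out, and the loss is
    renormalized over the remaining valid tokens."""
    n = len(token_probs)
    # Flag roughly `spurious_frac` of tokens (at least one), mirroring
    # the ~0.01% fraction reported in the paper.
    k = max(1, int(spurious_frac * n))
    cutoff = sorted(token_probs)[k - 1]
    # Spurious = low-probability token that inherits a positive
    # sequence-level advantage; everything else stays valid.
    valid = [not (p <= cutoff and a > 0)
             for p, a in zip(token_probs, advantages)]
    # Per-token policy-gradient surrogate: -advantage * log pi(token).
    per_token = [-a * math.log(p)
                 for p, a in zip(token_probs, advantages)]
    n_valid = max(sum(valid), 1)
    # Renormalize over valid tokens only, so masking does not shrink
    # the effective learning rate of the surviving tokens.
    return sum(l for l, v in zip(per_token, valid) if v) / n_valid
```

For example, with probabilities `[0.9, 0.8, 0.001, 0.7]` and all-positive advantages, the 0.001-probability token is masked and the loss averages over the other three, which is exactly the abnormally amplified update the paper attributes to spurious tokens.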
Original Abstract
Reinforcement Learning (RL) has significantly improved large language model reasoning, but existing RL fine-tuning methods rely heavily on heuristic techniques such as entropy regularization and reweighting to maintain stability. In practice, they often experience late-stage performance collapse, leading to degraded reasoning quality and unstable training. We derive that the magnitude of token-wise policy gradients in RL is negatively correlated with token probability and local policy entropy. Building on this result, we prove that training instability is driven by a tiny fraction of tokens, approximately 0.01%, which we term *spurious tokens*. When such tokens appear in correct responses, they contribute little to the reasoning outcome but inherit the full sequence-level reward, leading to abnormally amplified gradient updates. Motivated by this observation, we propose Spurious-Token-Aware Policy Optimization (STAPO) for large-scale model refining, which selectively masks such updates and renormalizes the loss over valid tokens. Across six mathematical reasoning benchmarks using Qwen 1.7B, 8B, and 14B base models, STAPO consistently demonstrates superior entropy stability and achieves an average performance improvement of 7.13% over GRPO, 20-Entropy and JustRL.