LLM Reasoning Relevance: 9/10

Beyond KL Divergence: Policy Optimization with Flexible Bregman Divergences for LLM Reasoning

Rui Yuan, Mykola Khandoga, Vinay Kumar Sankarapu
arXiv: 2602.04380v1 Published: 2026-02-04 Updated: 2026-02-04

AI Summary

Proposes the GBMPO framework, which explores Bregman divergences for policy optimization in LLM reasoning and delivers notable gains on mathematical reasoning and code generation.

Main Contributions

  • Proposes the Group-Based Mirror Policy Optimization (GBMPO) framework
  • Explores multiple Bregman divergences for policy regularization, including hand-designed alternatives and learned neural mirror maps
  • Shows that the choice of Bregman divergence matters for policy optimization in LLM reasoning

Methodology

Extends group-based policy optimization methods by introducing flexible Bregman divergences as the regularization term, with experiments validating their effectiveness on mathematical reasoning and code generation.
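To make the regularizer swap concrete, below is a minimal sketch of a group-based policy loss with a pluggable divergence term, contrasting the usual per-token KL penalty with an L2 penalty in probability space (the ProbL2 variant named in the abstract). The function name, tensor shapes, and the group-mean advantage baseline are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def group_policy_loss(logprobs, ref_logprobs, rewards, beta=0.04, divergence="kl"):
    """Group-based policy loss with a pluggable divergence regularizer (sketch).

    logprobs, ref_logprobs: (G, T) log-probs of the sampled tokens under the
        current policy and a frozen reference policy.
    rewards: (G,) scalar rewards for the G responses of one prompt group.
    """
    # Group-relative advantage: centre rewards within the group (GRPO-style).
    adv = rewards - rewards.mean()

    # Policy-gradient term over the sampled tokens.
    pg_loss = -(adv.unsqueeze(-1) * logprobs).mean()

    # Regularizer pulling the policy toward the reference.
    if divergence == "kl":
        # Per-token k3 estimator of KL(policy || reference), as used in GRPO.
        log_ratio = ref_logprobs - logprobs
        reg = (log_ratio.exp() - log_ratio - 1.0).mean()
    elif divergence == "prob_l2":
        # Squared L2 distance between per-token probabilities
        # (the Bregman divergence generated by 0.5 * ||x||^2).
        reg = 0.5 * (logprobs.exp() - ref_logprobs.exp()).pow(2).mean()
    else:
        raise ValueError(f"unknown divergence: {divergence!r}")

    return pg_loss + beta * reg

# Toy usage: a group of G = 4 responses with T = 16 tokens each.
logp = -torch.rand(4, 16)
ref_logp = -torch.rand(4, 16)
rewards = torch.tensor([1.0, 0.0, 0.0, 1.0])
loss = group_policy_loss(logp, ref_logp, rewards, divergence="prob_l2")
```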

Original Abstract

Policy optimization methods like Group Relative Policy Optimization (GRPO) and its variants have achieved strong results on mathematical reasoning and code generation tasks. Despite extensive exploration of reward processing strategies and training dynamics, all existing group-based methods exclusively use KL divergence for policy regularization, leaving the choice of divergence function unexplored. We introduce Group-Based Mirror Policy Optimization (GBMPO), a framework that extends group-based policy optimization to flexible Bregman divergences, including hand-designed alternatives (L2 in probability space) and learned neural mirror maps. On GSM8K mathematical reasoning, hand-designed ProbL2-GRPO achieves 86.7% accuracy, improving +5.5 points over the Dr. GRPO baseline. On MBPP code generation, neural mirror maps reach 60.1-60.8% pass@1, with random initialization already capturing most of the benefit. While evolutionary strategies meta-learning provides marginal accuracy improvements, its primary value lies in variance reduction ($\pm$0.2 versus $\pm$0.6) and efficiency gains (15% shorter responses on MBPP), suggesting that random initialization of neural mirror maps is sufficient for most practical applications. These results establish divergence choice as a critical, previously unexplored design dimension in group-based policy optimization for LLM reasoning.
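For the learned-divergence case, recall that a Bregman divergence is generated by a convex potential $\phi$ via $D_\phi(p, q) = \phi(p) - \phi(q) - \langle \nabla\phi(q),\, p - q \rangle$; KL arises from the negative-entropy potential and ProbL2 from $\phi(x) = \tfrac12\|x\|^2$. The sketch below parameterizes $\phi$ with a small input-convex network and evaluates its Bregman divergence via autograd; the architecture and names are assumptions for illustration, not the paper's neural mirror map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvexPotential(nn.Module):
    """Toy input-convex potential phi (assumed form, not the paper's).

    Non-negative weights on the hidden path plus convex, non-decreasing
    activations keep phi convex in its input, so D_phi below is a valid
    Bregman divergence.
    """
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.lin_x0 = nn.Linear(dim, hidden)
        self.lin_x1 = nn.Linear(dim, hidden)
        self.w_z = nn.Parameter(0.1 * torch.rand(hidden, hidden))  # clamped >= 0
        self.w_out = nn.Parameter(0.1 * torch.rand(hidden))        # clamped >= 0

    def forward(self, x):
        z = F.softplus(self.lin_x0(x))
        z = F.softplus(z @ self.w_z.clamp(min=0) + self.lin_x1(x))
        return z @ self.w_out.clamp(min=0)  # one scalar per row of x

def bregman_divergence(phi, p, q):
    """D_phi(p, q) = phi(p) - phi(q) - <grad phi(q), p - q>, row-wise."""
    q = q.detach().requires_grad_(True)
    phi_q = phi(q)
    (grad_q,) = torch.autograd.grad(phi_q.sum(), q, create_graph=True)
    return phi(p) - phi_q - ((p - q) * grad_q).sum(-1)

# Toy usage on per-token probability vectors over a small vocabulary.
phi = ConvexPotential(dim=8)
p = torch.softmax(torch.randn(4, 8), dim=-1)
q = torch.softmax(torch.randn(4, 8), dim=-1)
penalty = bregman_divergence(phi, p, q).mean()  # would replace the KL term
```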

Tags

LLM · Policy Optimization · Bregman Divergence · Reasoning · Code Generation

arXiv Categories

cs.LG cs.AI