Agent Tuning & Optimization (Relevance: 8/10)

$f$-GRPO and Beyond: Divergence-Based Reinforcement Learning Algorithms for General LLM Alignment

Rajdeep Haldar, Lantao Mei, Guang Lin, Yue Xing, Qifan Song
arXiv: 2602.05946v1 Published: 2026-02-05 Updated: 2026-02-05

AI Summary

The paper proposes $f$-divergence-based algorithms for general LLM alignment that perform strongly on both reinforcement learning with verifiable rewards and preference alignment tasks.

Key Contributions

  • Proposes two new alignment algorithms, $f$-GRPO and $f$-HAL
  • Casts preference alignment as estimation of a divergence between distributions
  • Provides theoretical guarantees for both algorithms

Methodology

Building on the variational representation of $f$-divergences, the paper formulates on-policy and hybrid on/off-policy reinforcement learning objectives that optimize LLM alignment.
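For context, the variational representation referenced above is not spelled out in this summary; the standard form (a general fact about $f$-divergences due to Nguyen, Wainwright, and Jordan, not a detail quoted from the paper) is

$$
D_f(P \,\|\, Q) \;=\; \sup_{T:\mathcal{X}\to\mathbb{R}} \; \mathbb{E}_{x\sim P}\!\left[T(x)\right] \;-\; \mathbb{E}_{x\sim Q}\!\left[f^{*}(T(x))\right],
$$

where $f^{*}$ is the convex conjugate of the generator $f$ and $T$ ranges over critic functions. Different choices of $f$ (KL, reverse KL, Jensen-Shannon, $\chi^2$) instantiate different estimators from the same template, which is what makes a whole class of alignment objectives possible.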

Original Abstract

Recent research shows that Preference Alignment (PA) objectives act as divergence estimators between aligned (chosen) and unaligned (rejected) response distributions. In this work, we extend this divergence-based perspective to general alignment settings, such as reinforcement learning with verifiable rewards (RLVR), where only environmental rewards are available. Within this unified framework, we propose $f$-Group Relative Policy Optimization ($f$-GRPO), a class of on-policy reinforcement learning objectives, and $f$-Hybrid Alignment Loss ($f$-HAL), a class of hybrid on/off-policy objectives, for general LLM alignment based on the variational representation of $f$-divergences. We provide theoretical guarantees that these classes of objectives improve the average reward after alignment. Empirically, we validate our framework on both RLVR (Math Reasoning) and PA tasks (Safety Alignment), demonstrating superior performance and flexibility compared to current methods.
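To make the divergence-based view concrete, here is a minimal, hypothetical PyTorch sketch of how a GRPO-style group-relative advantage could be composed with the convex conjugate $f^{*}$ from the variational bound above. The function names, the KL conjugate choice, and the positive/negative split are illustrative assumptions for this summary, not the paper's actual $f$-GRPO objective.

```python
import torch

def f_star_kl(t):
    # Convex conjugate of the KL generator f(u) = u*log(u): f*(t) = exp(t - 1).
    return torch.exp(t - 1.0)

def grpo_group_advantages(rewards):
    # GRPO-style group-relative advantages: standardize each group's rewards
    # by that group's own mean and standard deviation.
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True).clamp_min(1e-6)
    return (rewards - mean) / std

def f_grpo_style_loss(logprobs, rewards, f_star=f_star_kl):
    # logprobs: (prompts, group) summed token log-probs per sampled response.
    # rewards:  (prompts, group) verifiable environment rewards.
    # Illustrative composition: high-advantage responses feed the linear term
    # of the variational bound, low-advantage ones go through the conjugate f*.
    adv = grpo_group_advantages(rewards)
    pos = (adv.clamp_min(0.0) * logprobs).mean()
    neg = (adv.clamp_max(0.0).abs() * f_star(logprobs)).mean()
    return -(pos - neg)  # negate so gradient descent maximizes the objective

if __name__ == "__main__":
    torch.manual_seed(0)
    logprobs = -torch.rand(4, 8)                   # fake per-response log-probs
    rewards = torch.randint(0, 2, (4, 8)).float()  # binary verifiable rewards
    print(f_grpo_style_loss(logprobs, rewards).item())
```

Swapping f_star_kl for another conjugate changes which divergence is being estimated without touching the rest of the pipeline, which matches the flexibility the abstract claims.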

Tags

LLM Alignment, Reinforcement Learning, f-divergence, Preference Alignment

arXiv Categories

cs.LG stat.ML