Agent Tuning & Optimization (Relevance: 9/10)

Autoregressive Direct Preference Optimization

Masanari Oi, Mahiro Ukai, Masahiro Kaneko, Naoaki Okazaki, Nakamasa Inoue
arXiv: 2602.09533v1 Published: 2026-02-10 Updated: 2026-02-10

AI Summary

The paper proposes Autoregressive DPO (ADPO), a new method that explicitly integrates autoregressive modeling into the preference optimization framework.

Main Contributions

  • Proposes ADPO, a new variant of DPO
  • Introduces the autoregressive assumption earlier in DPO's theoretical framework, before the Bradley-Terry model is applied
  • Distinguishes two length measures: token length and feedback length

Methodology

The ADPO loss function is derived by revisiting the theoretical foundations of DPO and explicitly introducing the autoregressive assumption before applying the Bradley-Terry (BT) model.

Original Abstract

Direct preference optimization (DPO) has emerged as a promising approach for aligning large language models (LLMs) with human preferences. However, the widespread reliance on the response-level Bradley-Terry (BT) model may limit its full potential, as the reference and learnable models are assumed to be autoregressive only after deriving the objective function. Motivated by this limitation, we revisit the theoretical foundations of DPO and propose a novel formulation that explicitly introduces the autoregressive assumption prior to applying the BT model. By reformulating and extending DPO, we derive a novel variant, termed Autoregressive DPO (ADPO), that explicitly integrates autoregressive modeling into the preference optimization framework. Without violating the theoretical foundations, the derived loss takes an elegant form: it shifts the summation operation in the DPO objective outside the log-sigmoid function. Furthermore, through theoretical analysis of ADPO, we show that there exist two length measures to be considered when designing DPO-based algorithms: the token length $\mu$ and the feedback length $\mu'$. To the best of our knowledge, we are the first to explicitly distinguish these two measures and analyze their implications for preference optimization in LLMs.
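The "summation outside the log-sigmoid" structure described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's exact loss: the function names, the use of plain Python floats in place of model log-probabilities, and the per-token pairing of the chosen and rejected sequences in the ADPO sketch are all assumptions made here for clarity.

```python
import math

def log_sigmoid(x):
    # Numerically plain log σ(x); fine for the small values used here.
    return -math.log(1.0 + math.exp(-x))

def dpo_loss(logratios_chosen, logratios_rejected, beta=0.1):
    # Standard DPO: per-token log-ratios log(π_θ/π_ref) are summed
    # into response-level scores first, and the log-sigmoid is applied
    # once to the response-level difference.
    delta = beta * (sum(logratios_chosen) - sum(logratios_rejected))
    return -log_sigmoid(delta)

def adpo_loss_sketch(logratios_chosen, logratios_rejected, beta=0.1):
    # Hypothetical ADPO-style sketch: the summation is moved outside
    # the log-sigmoid, so each token contributes its own log-sigmoid
    # term. Pairing tokens positionally is an assumption of this demo.
    loss = 0.0
    for lc, lr in zip(logratios_chosen, logratios_rejected):
        loss += -log_sigmoid(beta * (lc - lr))
    return loss
```

With identical inputs both losses reduce to a constant (log 2 per sigmoid term), since every preference margin is zero; the two differ once the per-token margins vary, which is where the token-level structure of ADPO comes into play.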

Tags

DPO · Preference Optimization · Autoregressive Modeling · Large Language Models

arXiv Categories

cs.AI