Agent Tuning & Optimization (Relevance: 7/10)

Knowledge Divergence and the Value of Debate for Scalable Oversight

Robin Young
arXiv: 2603.05293v1 Published: 2026-03-05 Updated: 2026-03-05

AI Summary

The paper analyzes the value of debate for scalable oversight of advanced AI systems, quantifying debate's advantage through the geometry of knowledge divergence between models.

Main Contributions

  • Establishes the first formal connection between debate and RLAIF
  • Provides a geometric foundation for understanding when adversarial oversight protocols are justified
  • Identifies distinct regimes of knowledge divergence and analyzes when debate is effective

Methodology

The paper parameterizes the value of debate via the principal angles between the models' representation subspaces, supported by theoretical proofs and a classification of knowledge-divergence regimes.
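The principal angles that parameterize knowledge divergence can be computed with standard linear algebra: orthonormalize a basis for each model's representation subspace, then take the singular values of the product of the two bases, which are the cosines of the principal angles. A minimal sketch (the function name and the toy subspaces are illustrative, not from the paper):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spans of A and B."""
    # Orthonormalize each basis via QR decomposition.
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines, -1.0, 1.0))

# Toy example: two 2-D subspaces of R^3 that share one direction (e1).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])  # span{e1, e2}
B = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])  # span{e1, e3}
angles = principal_angles(A, B)
# Smallest angle is 0 (shared direction e1); largest is pi/2 (e2 vs e3),
# a simple case of partially overlapping ("one-sided") knowledge.
```

In the paper's terms, all angles near 0 corresponds to the shared-knowledge regime, while larger angles indicate divergent knowledge, where the analysis shows debate becomes valuable.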

Original Abstract

AI safety via debate and reinforcement learning from AI feedback (RLAIF) are both proposed methods for scalable oversight of advanced AI systems, yet no formal framework relates them or characterizes when debate offers an advantage. We analyze this by parameterizing debate's value through the geometry of knowledge divergence between debating models. Using principal angles between models' representation subspaces, we prove that the debate advantage admits an exact closed form. When models share identical training corpora, debate reduces to an RLAIF-like setting in which a single-agent method recovers the same optimum. When models possess divergent knowledge, the debate advantage scales with a phase transition from a quadratic regime (debate offers negligible benefit) to a linear regime (debate is essential). We classify three regimes of knowledge divergence (shared, one-sided, and compositional) and provide existence results showing that debate can achieve outcomes inaccessible to either model alone, alongside a negative result showing that sufficiently strong adversarial incentives cause coordination failure in the compositional regime, with a sharp threshold separating effective from ineffective debate. We offer the first formal connection between debate and RLAIF, a geometric foundation for understanding when adversarial oversight protocols are justified, and a connection to the problem of eliciting latent knowledge across models with complementary information.

Tags

AI safety Debate RLAIF Knowledge Divergence

arXiv Categories

cs.LG cs.CL