LLM Reasoning · Relevance: 9/10

Why Pass@k Optimization Can Degrade Pass@1: Prompt Interference in LLM Post-training

Anas Barakat, Souradip Chakraborty, Khushbu Pahwa, Amrit Singh Bedi
arXiv: 2602.21189v1 · Published: 2026-02-24 · Updated: 2026-02-24

AI Summary

The study shows that pass@k optimization can degrade pass@1 performance, and attributes this degradation to gradient conflict induced by prompt interference.

Key Contributions

  • A theoretical analysis of why pass@k optimization can degrade pass@1
  • Identification of gradient conflict induced by prompt interference
  • Experimental validation of the theoretical findings on mathematical reasoning tasks

Methodology

The authors theoretically derive the conflict between pass@k and pass@1 policy gradients, and validate the theory with LLM experiments on verifiable mathematical reasoning tasks.
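The prompt-reweighting mechanism behind this gradient conflict can be sketched with a toy calculation (all probabilities and gradient directions below are hypothetical, chosen for illustration, not taken from the paper). For a prompt with per-sample success probability $p$, the per-prompt pass@k objective is $1-(1-p)^k$, whose derivative $k(1-p)^{k-1}$ implicitly upweights low-success prompts; if such a prompt's gradient points against the others (what the paper calls negative interference), the aggregate pass@k update can rotate away from the pass@1 update.

```python
# Toy sketch of pass@k prompt reweighting (all numbers hypothetical).
# Per-prompt pass@k objective: 1 - (1 - p)^k; its derivative in p,
# k * (1 - p)^(k - 1), acts as an implicit weight on that prompt's gradient.
import math

def passk_weight(p: float, k: int) -> float:
    return k * (1.0 - p) ** (k - 1)

# Two prompts: A is easy (p = 0.90); B is hard (p = 0.05) and its gradient
# direction "negatively interferes" with A's.
probs = [0.90, 0.05]
grads = [(1.0, 0.0), (-0.5, 0.2)]

def aggregate(k: int) -> tuple[float, float]:
    # Weighted sum of per-prompt gradients under pass@k optimization.
    gx = sum(passk_weight(p, k) * g[0] for p, g in zip(probs, grads))
    gy = sum(passk_weight(p, k) * g[1] for p, g in zip(probs, grads))
    return gx, gy

g1, g8 = aggregate(1), aggregate(8)   # pass@1 vs pass@8 update directions
dot = g1[0] * g8[0] + g1[1] * g8[1]
cos = dot / (math.hypot(*g1) * math.hypot(*g8))
print(round(cos, 2))  # -0.72: the pass@8 update opposes the pass@1 update
```

With k = 8 the hard prompt's weight dwarfs the easy prompt's, so the aggregate update flips sign along the easy prompt's direction; with k = 1 both prompts are weighted equally.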

Original Abstract

Pass@$k$ is a widely used performance metric for verifiable large language model tasks, including mathematical reasoning, code generation, and short-answer reasoning. It defines success if any of $k$ independently sampled solutions passes a verifier. This multi-sample inference metric has motivated inference-aware fine-tuning methods that directly optimize pass@$k$. However, prior work reports a recurring trade-off: pass@$k$ improves while pass@1 degrades under such methods. This trade-off is practically important because pass@1 often remains a hard operational constraint due to latency and cost budgets, imperfect verifier coverage, and the need for a reliable single-shot fallback. We study the origin of this trade-off and provide a theoretical characterization of when pass@$k$ policy optimization can reduce pass@1 through gradient conflict induced by prompt interference. We show that pass@$k$ policy gradients can conflict with pass@1 gradients because pass@$k$ optimization implicitly reweights prompts toward low-success prompts; when these prompts are what we term negatively interfering, their upweighting can rotate the pass@$k$ update direction away from the pass@1 direction. We illustrate our theoretical findings with large language model experiments on verifiable mathematical reasoning tasks.
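For context, the pass@k metric defined in the abstract is typically estimated from $n$ sampled solutions with $c$ verified successes using the standard unbiased combinatorial estimator common in the code-generation literature (this is the community-standard estimator, not a method introduced by this paper):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n samples, c of which pass the verifier.

    Computes 1 - C(n - c, k) / C(n, k): one minus the probability that a
    random size-k subset of the n samples contains no verified success.
    """
    if n - c < k:
        return 1.0  # every size-k subset must contain at least one success
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=20, c=5, k=1))            # 0.25, the per-sample success rate
print(round(pass_at_k(n=20, c=5, k=8), 3))  # 0.949
```

Note how a prompt that looks weak under pass@1 (0.25) is nearly saturated under pass@8 (about 0.95), which is why directly optimizing pass@k shifts attention toward prompts with much lower per-sample success.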

Tags

LLM · Pass@k · Pass@1 · Prompt interference · Gradient conflict

arXiv Categories

cs.LG cs.AI