AI Agents relevance: 9/10

Training Multi-Turn Search Agent via Contrastive Dynamic Branch Sampling

Yubao Zhao, Weiquan Huang, Sudong Wang, Ruochen Zhao, Chen Chen, Yao Shu, Chengwei Qin
arXiv: 2602.03719v1 Published: 2026-02-03 Updated: 2026-02-03

AI Summary

BranPO optimizes multi-turn search agents via contrastive dynamic branch sampling, improving performance on long-horizon tasks.

Key Contributions

  • Proposes Branching Relative Policy Optimization (BranPO)
  • Introduces difficulty-aware branch sampling and redundant step masking
  • Validates BranPO's effectiveness on multiple question-answering benchmarks

Methodology

Constructs contrastive suffixes by truncating trajectories near the tail and resampling alternative continuations, then applies contrastive supervision over the shared prefix to reduce credit-assignment ambiguity in long-horizon rollouts.
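The branching idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the truncation fraction, and the group-mean baseline (GRPO-style) are all assumptions.

```python
def branch_contrastive_suffixes(trajectory, resample_fn,
                                truncate_frac=0.8, num_branches=2):
    """Truncate a rollout near its tail, keep the shared prefix, and
    resample alternative continuations so that suffixes can be
    contrasted against each other under the same prefix."""
    cut = max(1, int(len(trajectory) * truncate_frac))  # branch point near the tail
    prefix = trajectory[:cut]
    # The original suffix plus resampled alternatives from the same prefix.
    suffixes = [trajectory[cut:]]
    for _ in range(num_branches):
        suffixes.append(resample_fn(prefix))
    return prefix, suffixes


def contrastive_advantages(rewards):
    """Value-free, group-relative advantage over suffixes sharing one
    prefix: each suffix's reward minus the group mean. Differences in
    outcome are then attributable to the tail decisions alone."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]
```

Because all suffixes in a group share the same prefix, any reward difference isolates the effect of the tail steps, which is where the paper observes performance diverging.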

Original Abstract

Agentic reinforcement learning has enabled large language models to perform complex multi-turn planning and tool use. However, learning in long-horizon settings remains challenging due to sparse, trajectory-level outcome rewards. While prior tree-based methods attempt to mitigate this issue, they often suffer from high variance and computational inefficiency. Through empirical analysis of search agents, we identify a common pattern: performance diverges mainly due to decisions near the tail. Motivated by this observation, we propose Branching Relative Policy Optimization (BranPO), a value-free method that provides step-level contrastive supervision without dense rewards. BranPO truncates trajectories near the tail and resamples alternative continuations to construct contrastive suffixes over shared prefixes, reducing credit ambiguity in long-horizon rollouts. To further boost efficiency and stabilize training, we introduce difficulty-aware branch sampling to adapt branching frequency across tasks, and redundant step masking to suppress uninformative actions. Extensive experiments on various question answering benchmarks demonstrate that BranPO consistently outperforms strong baselines, achieving significant accuracy gains on long-horizon tasks without increasing the overall training budget. Our code is available at https://github.com/YubaoZhao/BranPO.
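The abstract's difficulty-aware branch sampling, which adapts branching frequency across tasks, might look something like the hypothetical schedule below. The heuristic shown (allocating branches by the variance of the task's success rate) is an assumption for illustration; the paper's actual criterion may differ.

```python
def branch_budget(success_rate, max_branches=4):
    """Hypothetical difficulty-aware schedule: spend more branches on
    tasks of intermediate difficulty, where contrasting suffixes is most
    informative, and fewer on tasks the policy always solves or always
    fails (where all suffixes get the same reward)."""
    # Variance of a Bernoulli outcome, p * (1 - p), peaks at p = 0.5.
    informativeness = success_rate * (1 - success_rate)  # in [0, 0.25]
    return round(max_branches * informativeness / 0.25)
```

A schedule of this shape keeps the overall rollout budget flat: branches saved on trivially easy or hopeless tasks are reallocated to tasks where contrastive suffixes actually separate good from bad tail decisions.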

Tags

Agent, Reinforcement Learning, Contrastive Learning, Multi-turn Planning

arXiv Categories

cs.CL