WideSeek-R1: Exploring Width Scaling for Broad Information Seeking via Multi-Agent Reinforcement Learning
AI Summary
WideSeek-R1 achieves width scaling via multi-agent reinforcement learning, improving LLM performance on broad information-seeking tasks.
Key Contributions
- Proposes the WideSeek-R1 framework, which uses a lead-agent-subagent architecture for broad information seeking
- Trains the agents with multi-agent reinforcement learning (MARL), jointly optimizing collaboration and parallel execution
- Validates the effectiveness of width scaling, achieving results on the WideSearch benchmark comparable to a much larger single-agent model
Methodology
A lead-agent-subagent framework trained with MARL, in which subagents with isolated contexts and specialized tools execute broad information-seeking subtasks in parallel.
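The orchestration pattern described above can be illustrated with a minimal sketch. Everything here is hypothetical: `subagent`, `lead_agent`, and the slice-based task decomposition are placeholders, not the paper's actual implementation; in practice each subagent would call the shared LLM with its own isolated context and search tools.

```python
from concurrent.futures import ThreadPoolExecutor


def subagent(subtask: str) -> list[str]:
    # Hypothetical subagent: runs with an isolated context.
    # A real subagent would query the shared LLM and use
    # specialized search/browse tools for its subtask.
    return [f"finding for: {subtask}"]


def lead_agent(task: str, num_subagents: int = 4) -> list[str]:
    # Hypothetical decomposition: the lead agent splits one broad
    # information-seeking task into independent subtasks.
    subtasks = [f"{task} (slice {i})" for i in range(num_subagents)]

    # Width scaling: subtasks run in parallel rather than via
    # turn-taking interactions between agents.
    with ThreadPoolExecutor(max_workers=num_subagents) as pool:
        results = pool.map(subagent, subtasks)

    # The lead agent aggregates subagent findings into one answer set.
    merged: list[str] = []
    for findings in results:
        merged.extend(findings)
    return merged
```

Increasing `num_subagents` widens the search in this sketch, mirroring the paper's observation that performance grows with the number of parallel subagents.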
Original Abstract
Recent advancements in Large Language Models (LLMs) have largely focused on depth scaling, where a single agent solves long-horizon problems with multi-turn reasoning and tool use. However, as tasks grow broader, the key bottleneck shifts from individual competence to organizational capability. In this work, we explore a complementary dimension of width scaling with multi-agent systems to address broad information seeking. Existing multi-agent systems often rely on hand-crafted workflows and turn-taking interactions that fail to parallelize work effectively. To bridge this gap, we propose WideSeek-R1, a lead-agent-subagent framework trained via multi-agent reinforcement learning (MARL) to synergize scalable orchestration and parallel execution. By utilizing a shared LLM with isolated contexts and specialized tools, WideSeek-R1 jointly optimizes the lead agent and parallel subagents on a curated dataset of 20k broad information-seeking tasks. Extensive experiments show that WideSeek-R1-4B achieves an item F1 score of 40.0% on the WideSearch benchmark, which is comparable to the performance of single-agent DeepSeek-R1-671B. Furthermore, WideSeek-R1-4B exhibits consistent performance gains as the number of parallel subagents increases, highlighting the effectiveness of width scaling.