AI Agents relevance: 9/10

Understanding Agent Scaling in LLM-Based Multi-Agent Systems via Diversity

Yingxuan Yang, Chengrui Qu, Muning Wen, Laixi Shi, Ying Wen, Weinan Zhang, Adam Wierman, Shangding Gu
arXiv: 2602.03794v1 Published: 2026-02-03 Updated: 2026-02-03

AI Summary

This paper studies how performance scales with the number of agents in LLM-based multi-agent systems, highlighting the central role of diversity.

Key Contributions

  • Proposes an information-theoretic framework in which multi-agent system performance is bounded by intrinsic task uncertainty
  • Derives architecture-agnostic performance bounds, highlighting the importance of the number of effective channels a system accesses
  • Introduces the $K^*$ metric, which quantifies the number of effective channels without requiring ground-truth labels
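The paper's exact $K^*$ definition is not reproduced in this digest, but the idea of counting effective channels from unlabeled agent outputs can be illustrated with a generic effective-dimension heuristic: the participation ratio of the eigenvalues of the agents' output-correlation matrix. The sketch below is a hypothetical illustration under that assumption, not the paper's actual metric; the data, function name, and thresholds are invented for the example.

```python
import numpy as np

def effective_channels(outputs: np.ndarray) -> float:
    """Participation-ratio 'effective rank' of agent output correlations.

    outputs: (n_agents, n_tasks) array of per-task scores.
    Uses only the outputs themselves, no ground-truth labels.
    NOTE: a generic effective-dimension heuristic, not the paper's K*.
    """
    c = np.corrcoef(outputs)            # (n_agents, n_agents) correlations
    lam = np.linalg.eigvalsh(c)         # eigenvalue spectrum
    lam = np.clip(lam, 0.0, None)       # guard tiny negative round-off
    return float(lam.sum() ** 2 / (lam ** 2).sum())

rng = np.random.default_rng(0)
base = rng.normal(size=200)
# four near-identical "homogeneous" agents: one shared signal plus noise
clones = np.stack([base + 0.1 * rng.normal(size=200) for _ in range(4)])
# four independent "heterogeneous" agents
diverse = rng.normal(size=(4, 200))

print(effective_channels(clones))   # close to 1: a single shared channel
print(effective_channels(diverse))  # close to 4: independent channels
```

Under this heuristic, four nearly identical agents collapse to roughly one effective channel, while four independent agents retain close to four, matching the digest's point that agent count alone overstates the information a homogeneous system actually accesses.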

Methodology

Models the performance bottleneck of multi-agent systems via information theory, and experimentally validates the advantage of heterogeneous agents.

Original Abstract

LLM-based multi-agent systems (MAS) have emerged as a promising approach to tackle complex tasks that are difficult for individual LLMs. A natural strategy is to scale performance by increasing the number of agents; however, we find that such scaling exhibits strong diminishing returns in homogeneous settings, while introducing heterogeneity (e.g., different models, prompts, or tools) continues to yield substantial gains. This raises a fundamental question: what limits scaling, and why does diversity help? We present an information-theoretic framework showing that MAS performance is bounded by the intrinsic task uncertainty, not by agent count. We derive architecture-agnostic bounds demonstrating that improvements depend on how many effective channels the system accesses. Homogeneous agents saturate early because their outputs are strongly correlated, whereas heterogeneous agents contribute complementary evidence. We further introduce $K^*$, an effective channel count that quantifies the number of effective channels without ground-truth labels. Empirically, we show that heterogeneous configurations consistently outperform homogeneous scaling: 2 diverse agents can match or exceed the performance of 16 homogeneous agents. Our results provide principled guidelines for building efficient and robust MAS through diversity-aware design. Code and Dataset are available at the link: https://github.com/SafeRL-Lab/Agent-Scaling.
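The abstract's core mechanism, correlated homogeneous agents saturating early while independent heterogeneous agents keep adding complementary evidence, can be sketched with a toy Monte Carlo majority-vote simulation. This is an illustrative model, not the paper's framework: the per-agent accuracy `P_CORRECT`, the copy probability `RHO`, and the shared-draw correlation structure are all assumed values chosen for the example.

```python
import random

random.seed(0)

P_CORRECT = 0.65   # per-agent accuracy on a binary task (assumed)
RHO = 0.8          # chance a homogeneous agent copies the shared draw (assumed)
TRIALS = 20_000

def majority_accuracy(n_agents: int, correlated: bool) -> float:
    """Fraction of trials where the majority vote is correct (ties split 50/50)."""
    wins = 0
    for _ in range(TRIALS):
        shared = random.random() < P_CORRECT      # one shared evidence "channel"
        votes = 0
        for _ in range(n_agents):
            if correlated and random.random() < RHO:
                ok = shared                        # copy the shared evidence
            else:
                ok = random.random() < P_CORRECT   # independent evidence
            votes += ok
        if votes * 2 > n_agents:
            wins += 1
        elif votes * 2 == n_agents and random.random() < 0.5:
            wins += 1                              # break exact ties by coin flip
    return wins / TRIALS

for n in (1, 2, 4, 16):
    print(n,
          round(majority_accuracy(n, correlated=True), 3),
          round(majority_accuracy(n, correlated=False), 3))
```

In this toy model the correlated ensemble plateaus near single-agent accuracy as it scales, because most votes echo the same shared draw, while the independent ensemble keeps improving with agent count, mirroring the qualitative scaling behavior the abstract describes.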

Tags

LLM, Multi-Agent System, Diversity, Information Theory

arXiv Categories

cs.AI cs.LG