Increasing intelligence in AI agents can worsen collective outcomes
AI Summary
Increasing the sophistication of AI agents can worsen collective behaviour, especially when resources are scarce.
Key Contributions
- Studied four key variables governing the collective behaviour of AI-agent populations: nature, nurture, culture, and resource scarcity.
- Showed that under resource scarcity, AI model diversity and reinforcement learning increase the risk of system overload.
- Proposed predicting the collective behaviour of AI-agent populations via the capacity-to-population ratio.
Methodology
The dynamics of AI-agent populations under different configurations were studied through empirical analysis and mathematical modelling.
Original Abstract
When resources are scarce, will a population of AI agents coordinate in harmony, or descend into tribal chaos? Diverse decision-making AI from different developers is entering everyday devices -- from phones and medical devices to battlefield drones and cars -- and these AI agents typically compete for finite shared resources such as charging slots, relay bandwidth, and traffic priority. Yet their collective dynamics and hence risks to users and society are poorly understood. Here we study AI-agent populations as the first system of real agents in which four key variables governing collective behaviour can be independently toggled: nature (innate LLM diversity), nurture (individual reinforcement learning), culture (emergent tribe formation), and resource scarcity. We show empirically and mathematically that when resources are scarce, AI model diversity and reinforcement learning increase dangerous system overload, though tribe formation lessens this risk. Meanwhile, some individuals profit handsomely. When resources are abundant, the same ingredients drive overload to near zero, though tribe formation makes the overload slightly worse. The crossover is arithmetical: it is where opposing tribes that form spontaneously first fit inside the available capacity. More sophisticated AI-agent populations are not better: whether their sophistication helps or harms depends entirely on a single number -- the capacity-to-population ratio -- that is knowable before any AI-agent ships.
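The crossover described above can be illustrated with a toy sketch (not the paper's actual model): suppose N agents split into two opposing "tribes", each tribe converging on one of two resources with per-resource capacity C. Overload occurs when a tribe's demand exceeds its resource's capacity, so it vanishes exactly when the tribes first fit inside the available capacity, a threshold set purely by the capacity-to-population ratio C/N. The function name and the even 50/50 split are illustrative assumptions.

```python
def overload(n_agents: int, capacity: int, split: float = 0.5) -> int:
    """Total units of demand exceeding capacity across both resources."""
    tribe_a = round(n_agents * split)   # agents converging on resource A
    tribe_b = n_agents - tribe_a        # agents converging on resource B
    return max(tribe_a - capacity, 0) + max(tribe_b - capacity, 0)

# Sweep capacity for a population of 100: overload drops to zero
# once each tribe fits inside the capacity (C >= N/2 for an even split).
for c in (10, 25, 50, 60):
    print(c, overload(100, c))
```

In this simplified picture the knee of the curve sits at C/N = 1/2; the paper's point is that such a threshold is arithmetic and computable from the capacity-to-population ratio before any agent is deployed.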