AI Agents relevance: 9/10

Fluid-Agent Reinforcement Learning

Shishir Sharma, Doina Precup, Theodore J. Perkins
arXiv: 2602.14559v1 Published: 2026-02-16 Updated: 2026-02-16

AI Summary

Proposes a fluid-agent reinforcement learning framework in which agents are allowed to create other agents.

Key Contributions

  • Proposes the fluid-agent environment
  • Proposes game-theoretic solution concepts for fluid-agent games
  • Empirically evaluates the performance of several MARL algorithms in fluid environments

Methodology

Proposes a new framework and runs experiments in fluid variants of benchmark environments such as Predator-Prey and Level-Based Foraging, where agents can be spawned dynamically.

Original Abstract

The primary focus of multi-agent reinforcement learning (MARL) has been to study interactions among a fixed number of agents embedded in an environment. However, in the real world, the number of agents is neither fixed nor known a priori. Moreover, an agent can decide to create other agents (for example, a cell may divide, or a company may spin off a division). In this paper, we propose a framework that allows agents to create other agents; we call this a fluid-agent environment. We present game-theoretic solution concepts for fluid-agent games and empirically evaluate the performance of several MARL algorithms within this framework. Our experiments include fluid variants of established benchmarks such as Predator-Prey and Level-Based Foraging, where agents can dynamically spawn, as well as a new environment we introduce that highlights how fluidity can unlock novel solution strategies beyond those observed in fixed-population settings. We demonstrate that this framework yields agent teams that adjust their size dynamically to match environmental demands.
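The core mechanism in the abstract, an agent whose action set includes creating another agent, can be illustrated with a toy environment. This is a minimal sketch under assumed semantics (the class name, the `spawn`/`work` actions, and the spawn cost are illustrative, not from the paper):

```python
import random

class FluidAgentEnv:
    """Toy fluid-agent environment: the agent population can grow at
    runtime because a 'spawn' action adds a fresh agent to the team.
    All names and rewards here are illustrative assumptions."""

    SPAWN_COST = 0.5  # assumed one-time penalty for creating an agent

    def __init__(self, n_agents=1, n_tasks=3, seed=0):
        self.rng = random.Random(seed)
        self.agents = list(range(n_agents))  # current agent IDs
        self.next_id = n_agents
        self.tasks = n_tasks  # remaining work to be done

    def step(self, actions):
        """actions: dict agent_id -> 'work' or 'spawn'.
        Returns per-agent rewards; spawning enlarges self.agents."""
        rewards = {}
        for aid, act in actions.items():
            if act == "spawn":
                self.agents.append(self.next_id)  # new agent joins
                self.next_id += 1
                rewards[aid] = -self.SPAWN_COST
            else:  # 'work': complete one task if any remain
                if self.tasks > 0:
                    self.tasks -= 1
                    rewards[aid] = 1.0
                else:
                    rewards[aid] = 0.0
        return rewards

env = FluidAgentEnv(n_agents=1, n_tasks=3)
env.step({0: "spawn"})                # agent 0 creates agent 1
env.step({0: "work", 1: "work"})      # the enlarged team now acts
print(len(env.agents), env.tasks)     # → 2 1
```

A learning agent in such an environment can trade the spawn cost against the extra task throughput a larger team provides, which is the size-vs-demand adjustment the abstract reports.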

Tags

Multi-Agent Reinforcement Learning  Fluid Agents  Game Theory

arXiv Categories

cs.LG cs.AI cs.MA