AI Agents relevance: 7/10

Convex Markov Games and Beyond: New Proof of Existence, Characterization and Learning Algorithms for Nash Equilibria

Anas Barakat, Ioannis Panageas, Antonios Varvitsiotis
arXiv: 2602.12181v1 Published: 2026-02-12 Updated: 2026-02-12

AI Summary

The paper extends convex Markov games by introducing General Utility Markov Games, and provides a proof of existence of Nash equilibria together with learning algorithms.

Main Contributions

  • Proved that Nash equilibria in General Utility Markov Games coincide with fixed points of projected pseudo-gradient dynamics
  • Proposed a policy-gradient-based learning algorithm
  • Established iteration-complexity and sample-complexity guarantees for potential games

Methodology

Equilibrium existence is proved via a fixed-point theorem; a policy gradient theorem is derived from an agent-wise gradient domination property; and a model-free policy gradient algorithm is designed on this basis.
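To make the projected pseudo-gradient dynamics concrete, here is a minimal single-agent sketch: maximize a general utility F of the discounted occupancy measure d(π) by ascending a finite-difference pseudo-gradient and projecting each policy row back onto the probability simplex. Everything here is illustrative, not the paper's algorithm: the toy MDP, the entropy utility, the step size `eta`, and the finite-difference gradient estimator are all assumptions made for the sketch.

```python
import numpy as np

def occupancy(P, pi, mu, gamma):
    """Discounted state-action occupancy measure d(s,a) induced by policy pi."""
    S = P.shape[0]
    P_pi = np.einsum('sa,sat->st', pi, P)           # state chain under pi
    d_s = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, mu)
    return d_s[:, None] * pi                         # d(s,a) = d(s) * pi(a|s)

def utility(d):
    """Example concave 'general utility': entropy of the occupancy measure."""
    d = np.clip(d, 1e-12, None)
    return -np.sum(d * np.log(d))

def proj_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - 1)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def projected_pg(P, mu, gamma, steps=100, eta=0.5, eps=1e-5):
    """Projected pseudo-gradient ascent on F(d(pi)), finite-difference gradients."""
    S, A = P.shape[0], P.shape[1]
    pi = np.full((S, A), 1.0 / A)                    # uniform initial policy
    for _ in range(steps):
        base = utility(occupancy(P, pi, mu, gamma))
        grad = np.zeros_like(pi)
        for s in range(S):
            for a in range(A):
                pert = pi.copy()
                pert[s, a] += eps
                grad[s, a] = (utility(occupancy(P, pert, mu, gamma)) - base) / eps
        for s in range(S):                           # project each row back to the simplex
            pi[s] = proj_simplex(pi[s] + eta * grad[s])
    return pi

# Toy 2-state, 2-action MDP (hypothetical numbers, for illustration only)
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.8, 0.2], [0.2, 0.8]]])
mu = np.array([1.0, 0.0])
gamma = 0.9

pi0 = np.full((2, 2), 0.5)
u0 = utility(occupancy(P, pi0, mu, gamma))
pi_final = projected_pg(P, mu, gamma)
u_final = utility(occupancy(P, pi_final, mu, gamma))
print(u0, u_final)
```

In the multi-agent GUMG setting, each agent would run such a projected update on its own policy while the utility couples the agents' occupancy measures; the paper's characterization says the fixed points of exactly these dynamics are the Nash equilibria.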

Original Abstract

Convex Markov Games (cMGs) were recently introduced as a broad class of multi-agent learning problems that generalize Markov games to settings where strategic agents optimize general utilities beyond additive rewards. While cMGs expand the modeling frontier, their theoretical foundations, particularly the structure of Nash equilibria (NE) and guarantees for learning algorithms, are not yet well understood. In this work, we address these gaps for an extension of cMGs, which we term General Utility Markov Games (GUMGs), capturing new applications requiring coupling between agents' occupancy measures. We prove that in GUMGs, Nash equilibria coincide with the fixed points of projected pseudo-gradient dynamics (i.e., first-order stationary points), enabled by a novel agent-wise gradient domination property. This insight also yields a simple proof of NE existence using Brouwer's fixed-point theorem. We further show the existence of Markov perfect equilibria. Building on this characterization, we establish a policy gradient theorem for GUMGs and design a model-free policy gradient algorithm. For potential GUMGs, we establish iteration complexity guarantees for computing approximate-NE under exact gradients and provide sample complexity bounds in both the generative model and on-policy settings. Our results extend beyond prior work restricted to zero-sum cMGs, providing the first theoretical analysis of common-interest cMGs.

Tags

Markov Games Nash Equilibria Multi-Agent Learning

arXiv Categories

cs.GT cs.LG cs.MA