AI Agents Relevance: 9/10

MaMa: A Game-Theoretic Approach for Designing Safe Agentic Systems

Jonathan Nöther, Adish Singla, Goran Radanovic
arXiv: 2602.04431v1 Published: 2026-02-04 Updated: 2026-02-04

AI Summary

The MaMa algorithm uses a game-theoretic approach to design safe agentic systems, defending against adversarial attacks and improving the safety of LLM-based multi-agent systems.

Key Contributions

  • Proposes MaMa, an algorithm for automatically designing safe agentic systems
  • Formalizes the system-safety problem as a Stackelberg security game
  • Shows empirically that systems designed with MaMa are robust and generalize beyond the training setting
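
The Stackelberg formulation above amounts to a bilevel max-min problem; the notation here is illustrative (not the paper's own symbols), with $\Pi$ a space of system designs, $A$ the set of agents, $k$ the adversary's compromise budget, and $U_{\text{safety}}$ a safety utility:

$$
\pi^* \in \arg\max_{\pi \in \Pi} \; \min_{\substack{S \subseteq A \\ |S| \le k}} \; U_{\text{safety}}(\pi, S)
$$

The designer (leader) commits to a design $\pi$; the best-responding adversary (follower) then selects the subset $S$ of agents to compromise that minimizes safety.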

Methodology

Uses LLM-based adversarial search: the Meta-Agent iteratively proposes system designs, while the Meta-Adversary searches for the strongest attack against each design and feeds the result back.
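
The iterative loop can be sketched as follows. This is a minimal illustration of the adversarial-search structure, not the paper's implementation: all callable names (`propose_design`, `find_strongest_attack`, `evaluate_safety`) are hypothetical placeholders that would wrap LLM calls in practice.

```python
def adversarial_search(propose_design, find_strongest_attack, evaluate_safety,
                       n_rounds=5):
    """Iteratively harden a system design against a best-responding adversary.

    Hypothetical sketch: the three callables stand in for LLM-backed
    components (Meta-Agent proposal, Meta-Adversary attack search, and
    safety evaluation under attack).
    """
    design, feedback = None, None
    best_design, best_safety = None, float("-inf")
    for _ in range(n_rounds):
        # Leader move: the Meta-Agent proposes a design, conditioned on
        # feedback from the previous round's strongest attack.
        design = propose_design(design, feedback)
        # Follower move: the Meta-Adversary searches for the strongest
        # attack (a subset of agents to compromise) against this design.
        attack = find_strongest_attack(design)
        # Worst-case safety under that attack becomes the feedback signal.
        safety = evaluate_safety(design, attack)
        feedback = (attack, safety)
        if safety > best_safety:
            best_design, best_safety = design, safety
    return best_design, best_safety
```

Keeping the best design seen so far (rather than the last one) guards against a later proposal that regresses under the adversary's best response.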

Original Abstract

LLM-based multi-agent systems have demonstrated impressive capabilities, but they also introduce significant safety risks when individual agents fail or behave adversarially. In this work, we study the automated design of agentic systems that remain safe even when a subset of agents is compromised. We formalize this challenge as a Stackelberg security game between a system designer (the Meta-Agent) and a best-responding Meta-Adversary that selects and compromises a subset of agents to minimize safety. We propose Meta-Adversary-Meta-Agent (MaMa), a novel algorithm for approximately solving this game and automatically designing safe agentic systems. Our approach uses LLM-based adversarial search, where the Meta-Agent iteratively proposes system designs and receives feedback based on the strongest attacks discovered by the Meta-Adversary. Empirical evaluations across diverse environments show that systems designed with MaMa consistently defend against worst-case attacks while maintaining performance comparable to systems optimized solely for task success. Moreover, the resulting systems generalize to stronger adversaries, as well as ones with different attack objectives or underlying LLMs, demonstrating robust safety beyond the training setting.

Tags

AI Agents, Safety, Game Theory, LLM

arXiv Categories

cs.LG cs.GT