Zombie Agents: Persistent Control of Self-Evolving LLM Agents via Self-Reinforcing Injections
AI 摘要
研究通过注入攻击长期控制自进化LLM Agent,使其执行未经授权的任务。
主要贡献
- 提出Zombie Agent攻击,一种针对自进化LLM Agent的持久控制攻击
- 设计了黑盒攻击框架,通过间接暴露方式注入恶意payload
- 针对不同内存实现,设计了抗截断和相关性过滤的持久化策略
方法论
通过控制Web内容,诱导Agent读取并存储恶意payload,利用触发机制激活payload,实现长期控制。
原文摘要
Self-evolving LLM agents update their internal state across sessions, often by writing and reusing long-term memory. This design improves performance on long-horizon tasks but creates a security risk: untrusted external content observed during a benign session can be stored as memory and later treated as instruction. We study this risk and formalize a persistent attack we call a Zombie Agent, where an attacker covertly implants a payload that survives across sessions, effectively turning the agent into a puppet of the attacker. We present a black-box attack framework that uses only indirect exposure through attacker-controlled web content. The attack has two phases. During infection, the agent reads a poisoned source while completing a benign task and writes the payload into long-term memory through its normal update process. During trigger, the payload is retrieved or carried forward and causes unauthorized tool behavior. We design mechanism-specific persistence strategies for common memory implementations, including sliding-window and retrieval-augmented memory, to resist truncation and relevance filtering. We evaluate the attack on representative agent setups and tasks, measuring both persistence over time and the ability to induce unauthorized actions while preserving benign task quality. Our results show that memory evolution can convert one-time indirect injection into persistent compromise, which suggests that defenses focused only on per-session prompt filtering are not sufficient for self-evolving agents.