Agentic AI as a Cybersecurity Attack Surface: Threats, Exploits, and Defenses in Runtime Supply Chains
AI Summary
This paper analyzes the cybersecurity risks of LLM-based agent systems in runtime supply chains and proposes a Zero-Trust Runtime Architecture.
Key Contributions
- Systematizes threats in agent runtime frameworks, including data and tool supply chain attacks
- Identifies the Viral Agent Loop
- Proposes a Zero-Trust Runtime Architecture to address these security risks
Methodology
The paper applies threat modeling to analyze the attack surface of agent systems in runtime environments and derives corresponding defenses.
Original Abstract
Agentic systems built on large language models (LLMs) extend beyond text generation to autonomously retrieve information and invoke tools. This runtime execution model shifts the attack surface from build-time artifacts to inference-time dependencies, exposing agents to manipulation through untrusted data and probabilistic capability resolution. While prior work has focused on model-level vulnerabilities, security risks emerging from cyclic and interdependent runtime behavior remain fragmented. We systematize these risks within a unified runtime framework, categorizing threats into data supply chain attacks (transient context injection and persistent memory poisoning) and tool supply chain attacks (discovery, implementation, and invocation). We further identify the Viral Agent Loop, in which agents act as vectors for self-propagating generative worms without exploiting code-level flaws. Finally, we advocate a Zero-Trust Runtime Architecture that treats context as untrusted control flow and constrains tool execution through cryptographic provenance rather than semantic inference.
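To make the final idea concrete, here is a minimal sketch of what "constraining tool execution through cryptographic provenance rather than semantic inference" could look like: the runtime refuses to invoke a tool unless its manifest carries a valid signature from a trusted registry, regardless of how plausible the tool description sounds. All names here (`sign_manifest`, `verify_and_invoke`, the shared-key scheme) are illustrative assumptions, not the paper's actual design, which may use asymmetric signatures.

```python
import hashlib
import hmac
import json

# Hypothetical key shared with a trusted tool registry (illustration only;
# a real deployment would use asymmetric signatures and key rotation).
TRUSTED_KEY = b"example-registry-key"

def sign_manifest(manifest: dict, key: bytes) -> str:
    """Sign a canonical JSON encoding of the tool manifest."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_and_invoke(manifest: dict, signature: str, tools: dict):
    """Invoke a tool only if its manifest's provenance checks out."""
    expected = sign_manifest(manifest, TRUSTED_KEY)
    if not hmac.compare_digest(expected, signature):
        # Reject: provenance failed, no matter what the manifest claims.
        raise PermissionError("tool provenance check failed")
    return tools[manifest["name"]](*manifest.get("args", []))

# Locally vetted tool implementations (hypothetical).
tools = {"add": lambda a, b: a + b}

manifest = {"name": "add", "args": [2, 3]}
signature = sign_manifest(manifest, TRUSTED_KEY)
print(verify_and_invoke(manifest, signature, tools))  # 5
```

The key property is that the decision to execute never depends on the LLM's semantic judgment of the tool description, only on a verifiable cryptographic check, which is what distinguishes this from prompt-level filtering.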