The LLMbda Calculus: AI Agents, Conversations, and Information Flow
AI Summary
The paper proposes LLMbda, an extension of the lambda calculus for formally reasoning about the security of LLM-driven AI agents.
Key Contributions
- Proposes the LLMbda calculus, which formalizes the interactions of AI agents
- Introduces information-flow control to safeguard agent security
- Proves a noninterference theorem, providing a theoretical foundation for safe agent programming (a paraphrased statement appears below)
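For intuition, termination-insensitive noninterference theorems of the kind the paper claims typically take the following shape; this is a standard paraphrase, not the paper's exact statement:

$$
e_1 \approx_L e_2 \;\land\; e_1 \Downarrow v_1 \;\land\; e_2 \Downarrow v_2 \;\Longrightarrow\; v_1 \approx_L v_2
$$

Here $\approx_L$ relates programs that agree on all data observable at level $L$: two runs that differ only in secret (or untrusted) inputs must, whenever both terminate, produce results indistinguishable at level $L$. Termination insensitivity means nothing is claimed about runs that diverge.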
Methodology
Constructs a lambda-calculus-based language with an LLM-invocation primitive, and defines its semantics, which is then used to analyze prompt-injection attacks and defenses.
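As a rough illustration only, and not the paper's actual grammar, the terms of such a calculus might be represented as follows; the constructor names (Query, Quarantine) and the two-point label lattice are our own invention for this sketch:

```ocaml
(* Hypothetical term syntax: untyped call-by-value lambda terms extended
   with labelled strings, an LLM-call primitive, and a quarantine construct
   for isolated sub-conversations. *)

type label = Trusted | Untrusted        (* illustrative integrity labels *)

type term =
  | Var of string                       (* variable *)
  | Lam of string * term                (* lambda abstraction *)
  | App of term * term                  (* call-by-value application *)
  | Str of string * label               (* string literal carrying a label *)
  | Query of term                       (* serialize the argument, send it as a
                                           prompt, parse the response as a term *)
  | Quarantine of term                  (* evaluate inside an isolated
                                           sub-conversation *)
```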
Original Abstract
A conversation with a large language model (LLM) is a sequence of prompts and responses, with each response generated from the preceding conversation. AI agents build such conversations automatically: given an initial human prompt, a planner loop interleaves LLM calls with tool invocations and code execution. This tight coupling creates a new and poorly understood attack surface. A malicious prompt injected into a conversation can compromise later reasoning, trigger dangerous tool calls, or distort final outputs. Despite the centrality of such systems, we currently lack a principled semantic foundation for reasoning about their behaviour and safety. We address this gap by introducing an untyped call-by-value lambda calculus enriched with dynamic information-flow control and a small number of primitives for constructing prompt-response conversations. Our language includes a primitive that invokes an LLM: it serializes a value, sends it to the model as a prompt, and parses the response as a new term. This calculus faithfully represents planner loops and their vulnerabilities, including the mechanisms by which prompt injection alters subsequent computation. The semantics explicitly captures conversations, and so supports reasoning about defenses such as quarantined sub-conversations, isolation of generated code, and information-flow restrictions on what may influence an LLM call. A termination-insensitive noninterference theorem establishes integrity and confidentiality guarantees, demonstrating that a formal calculus can provide rigorous foundations for safe agentic programming.
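To make the information-flow mechanism concrete, here is a minimal sketch under our own simplifying assumptions: two integrity labels, a join that taints mixed data, and a guard that refuses to let untrusted data influence an LLM call. The ask_llm parameter is a stand-in for a real model API, not anything defined in the paper:

```ocaml
(* Dynamic information-flow sketch: values carry integrity labels, labels
   are joined when values are combined, and LLM calls are guarded. *)

type label = Trusted | Untrusted

(* Combining any untrusted input taints the result. *)
let join a b = if a = Trusted && b = Trusted then Trusted else Untrusted

exception Blocked of string

(* Guarded LLM call: an information-flow restriction on what may reach the
   model. The response is labelled Untrusted regardless, since earlier
   injected content in the conversation may have influenced it. *)
let guarded_query ~ask_llm (prompt, lbl) =
  if lbl = Untrusted then raise (Blocked "untrusted data may not reach the LLM")
  else (ask_llm prompt, Untrusted)

let () =
  let fake_llm p = "response to: " ^ p in          (* stub model for the demo *)
  let doc = ("attacker-controlled text", Untrusted) in
  let task = ("summarize the meeting notes", Trusted) in
  (* Concatenating the untrusted document taints the whole prompt. *)
  let tainted = (fst task ^ " " ^ fst doc, join (snd task) (snd doc)) in
  (match guarded_query ~ask_llm:fake_llm tainted with
   | _ -> print_endline "unexpected: tainted prompt was sent"
   | exception Blocked msg -> print_endline ("blocked: " ^ msg));
  let resp, lbl = guarded_query ~ask_llm:fake_llm task in
  assert (lbl = Untrusted);                        (* responses start untrusted *)
  print_endline resp
```

Labelling every response Untrusted is one conservative choice consistent with the threat model in the abstract: once injected content can shape the model's reasoning, its outputs cannot be assumed to preserve integrity and must be re-validated before they drive tool calls.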