AI Agents Relevance: 9/10

Agent-Sentry: Bounding LLM Agents via Execution Provenance

Rohan Sequeira, Stavros Damianakis, Umar Iqbal, Konstantinos Psounis
arXiv: 2603.22868v1 Published: 2026-03-24 Updated: 2026-03-24

AI Summary

Agent-Sentry constrains LLM agent behavior through execution provenance, defending against out-of-bounds execution attacks while safeguarding system security and user intent.

Key Contributions

  • Proposes the Agent-Sentry framework, which bounds the functional scope of agents.
  • Constructs behavioral bounds by learning from agent execution traces.
  • Defends against out-of-bounds execution attacks while preserving system utility.

Methodology

Agent-Sentry mines an agent's frequently used functionalities along with their execution traces, learns a policy from them, and blocks tool calls that deviate from the learned behaviors or misalign with user intent.
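The paper does not publish its policy-learning details, but the idea of learning behavioral bounds from traces and blocking deviating tool calls can be sketched as follows. This is a minimal illustration, assuming the learned policy is simply the set of tool-call transitions observed in known-good execution traces; the class and function names are hypothetical, not the paper's actual API.

```python
class TraceGuard:
    """Illustrative trace-based tool-call guard (not the paper's implementation).

    Learns behavioral bounds as the set of (previous_tool, next_tool)
    transitions observed in known-good execution traces, then blocks
    any proposed tool call whose transition was never observed.
    """

    START = "<start>"  # sentinel marking the beginning of a trace

    def __init__(self, traces):
        # Mine allowed transitions from recorded execution traces.
        self.allowed = set()
        for trace in traces:
            prev = self.START
            for tool in trace:
                self.allowed.add((prev, tool))
                prev = tool

    def check(self, prev_tool, next_tool):
        # Permit the call only if this transition appeared during mining.
        return (prev_tool, next_tool) in self.allowed


# Example: traces mined from a hypothetical flight-booking agent.
traces = [
    ["search_flights", "book_flight", "send_confirmation"],
    ["search_flights", "send_confirmation"],
]
guard = TraceGuard(traces)
print(guard.check("search_flights", "book_flight"))   # observed transition: allowed
print(guard.check("book_flight", "delete_account"))   # never observed: blocked
```

A real system would additionally check alignment with the user's stated intent (e.g. via an intent classifier over the instruction), not just trace membership; this sketch covers only the behavioral-bounds half of the design.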

Original Abstract

Agentic computing systems, which autonomously spawn new functionalities based on natural language instructions, are becoming increasingly prevalent. While immensely capable, these systems raise serious security, privacy, and safety concerns. Fundamentally, the full set of functionalities offered by these systems, combined with their probabilistic execution flows, is not known beforehand. Given this lack of characterization, it is non-trivial to validate whether a system has successfully carried out the user's intended task or instead executed irrelevant actions, potentially as a consequence of compromise. In this paper, we propose Agent-Sentry, a framework that attempts to bound agentic systems to address this problem. Our key insight is that agentic systems are designed for specific use cases and therefore need not expose unbounded or unspecified functionalities. Once bounded, these systems become easier to scrutinize. Agent-Sentry operationalizes this insight by uncovering frequent functionalities offered by an agentic system, along with their execution traces, to construct behavioral bounds. It then learns a policy from these traces and blocks tool calls that deviate from learned behaviors or that misalign with user intent. Our evaluation shows that Agent-Sentry helps prevent over 90% of attacks that attempt to trigger out-of-bounds executions, while preserving up to 98% of system utility.

Tags

AI Agents Security LLM Execution Provenance Behavioral Bounding

arXiv Categories

cs.CR cs.AI