AI Agents 相关度: 10/10

AgentSentry: Mitigating Indirect Prompt Injection in LLM Agents via Temporal Causal Diagnostics and Context Purification

Tian Zhang, Yiwei Xu, Juan Wang, Keyan Guo, Xiaoyang Xu, Bowen Xiao, Quanlong Guan, Jinlin Fan, Jiawei Liu, Zhiquan Liu, Hongxin Hu
arXiv: 2602.22724v1 发布: 2026-02-26 更新: 2026-02-26

AI 摘要

AgentSentry通过因果诊断和上下文净化,有效缓解LLM Agent中的间接提示注入攻击。

主要贡献

  • 提出AgentSentry,一种针对工具增强LLM Agent的推理时检测和缓解框架。
  • 将多轮IPI建模为时间因果接管,通过反事实重执行定位接管点。
  • 通过因果引导的上下文净化,在保留任务相关证据的同时,消除攻击引起的偏差。

方法论

AgentSentry通过反事实重执行和因果引导的上下文净化来检测和消除IPI攻击,保障Agent安全。

原文摘要

Large language model (LLM) agents increasingly rely on external tools and retrieval systems to autonomously complete complex tasks. However, this design exposes agents to indirect prompt injection (IPI), where attacker-controlled context embedded in tool outputs or retrieved content silently steers agent actions away from user intent. Unlike prompt-based attacks, IPI unfolds over multi-turn trajectories, making malicious control difficult to disentangle from legitimate task execution. Existing inference-time defenses primarily rely on heuristic detection and conservative blocking of high-risk actions, which can prematurely terminate workflows or broadly suppress tool usage under ambiguous multi-turn scenarios. We propose AgentSentry, a novel inference-time detection and mitigation framework for tool-augmented LLM agents. To the best of our knowledge, AgentSentry is the first inference-time defense to model multi-turn IPI as a temporal causal takeover. It localizes takeover points via controlled counterfactual re-executions at tool-return boundaries and enables safe continuation through causally guided context purification that removes attack-induced deviations while preserving task-relevant evidence. We evaluate AgentSentry on the \textsc{AgentDojo} benchmark across four task suites, three IPI attack families, and multiple black-box LLMs. AgentSentry eliminates successful attacks and maintains strong utility under attack, achieving an average Utility Under Attack (UA) of 74.55 %, improving UA by 20.8 to 33.6 percentage points over the strongest baselines without degrading benign performance.

标签

LLM Agents Prompt Injection Causal Inference Security

arXiv 分类

cs.CR cs.AI