Relevance to AI Agents: 9/10

ICON: Indirect Prompt Injection Defense for Agents based on Inference-Time Correction

Che Wang, Fuyao Zhang, Jiaming Zhang, Ziqi Zhang, Yinghui Wang, Longtao Huang, Jianbo Gao, Zhong Chen, Wei Yang Bryan Lim
arXiv: 2602.20708v1 Published: 2026-02-24 Updated: 2026-02-24

AI Summary

ICON probes for and corrects attack signatures in the LLM agent's latent space, effectively defending against indirect prompt injection attacks while improving task success rates.

Key Contributions

  • Proposes a detection method for indirect prompt injection attacks based on latent-space features
  • Designs an attention-guided mechanism for rectifying adversarial inputs
  • Achieves a better balance between security and task completion

Methodology

A Latent Space Trace Prober detects attacks; a Mitigating Rectifier then selectively steers attention to restore the LLM's correct behavioral trajectory.
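The probing step can be sketched as a linear probe over hidden-state activations that yields a per-token "intensity" score, with a threshold flagging an attack. This is a minimal illustration, not the paper's implementation: the names (`probe_w`, `INTENSITY_THRESHOLD`), the probe architecture, and the threshold value are all assumptions.

```python
import numpy as np

# Hypothetical latent-space trace prober: a linear probe scores each token's
# hidden state, and the maximum per-token attack probability serves as the
# "intensity" score. Weights would be trained offline on attack/benign traces.
rng = np.random.default_rng(0)
HIDDEN = 64
probe_w = rng.normal(size=HIDDEN)   # illustrative probe weights
probe_b = 0.0
INTENSITY_THRESHOLD = 0.9           # illustrative, tuned on validation data

def intensity_score(hidden_states: np.ndarray) -> float:
    """hidden_states: (seq_len, HIDDEN) activations from one layer."""
    logits = hidden_states @ probe_w + probe_b
    probs = 1.0 / (1.0 + np.exp(-logits))  # per-token attack probability
    return float(probs.max())              # over-focusing shows up as a spike

def is_injected(hidden_states: np.ndarray) -> bool:
    """Flag the trace as an IPI attack if any token's score spikes."""
    return intensity_score(hidden_states) > INTENSITY_THRESHOLD
```

Using the maximum rather than the mean reflects the paper's observation that IPI attacks produce localized over-focusing signatures rather than a uniform shift.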

Original Abstract

Large Language Model (LLM) agents are susceptible to Indirect Prompt Injection (IPI) attacks, where malicious instructions in retrieved content hijack the agent's execution. Existing defenses typically rely on strict filtering or refusal mechanisms, which suffer from a critical limitation: over-refusal, prematurely terminating valid agentic workflows. We propose ICON, a probing-to-mitigation framework that neutralizes attacks while preserving task continuity. Our key insight is that IPI attacks leave distinct over-focusing signatures in the latent space. We introduce a Latent Space Trace Prober to detect attacks based on high intensity scores. Subsequently, a Mitigating Rectifier performs surgical attention steering that selectively manipulates adversarial query-key dependencies while amplifying task-relevant elements to restore the LLM's functional trajectory. Extensive evaluations on multiple backbones show that ICON achieves a competitive 0.4% ASR, matching commercial-grade detectors, while yielding an over 50% task utility gain. Furthermore, ICON demonstrates robust Out-of-Distribution (OOD) generalization and extends effectively to multi-modal agents, establishing a superior balance between security and efficiency.
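The "surgical attention steering" of the Mitigating Rectifier can be illustrated as suppressing attention mass on flagged adversarial key positions and renormalizing, so probability mass shifts back to task-relevant tokens. This is a sketch under stated assumptions: the function name, the multiplicative `suppress` factor, and the row-stochastic input format are illustrative, not the paper's exact procedure.

```python
import numpy as np

def steer_attention(attn: np.ndarray, flagged: np.ndarray,
                    suppress: float = 0.1) -> np.ndarray:
    """Damp adversarial query-key dependencies and renormalize.

    attn:    (seq, seq) row-stochastic attention weights.
    flagged: (seq,) boolean mask of adversarial key positions.
    """
    steered = attn.copy()
    steered[:, flagged] *= suppress                 # damp flagged keys
    steered /= steered.sum(axis=1, keepdims=True)   # rows sum to 1 again
    return steered                                  # mass moves to task tokens
```

Renormalizing after suppression is what "amplifies" task-relevant elements: the attention each query pays to unflagged keys rises in proportion, without touching the model's weights.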

Tags

Indirect Prompt Injection, LLM Agents, Adversarial Defense, Attention Steering, Latent Space Analysis

arXiv Categories

cs.AI cs.CR