Multimodal Learning · Relevance: 9/10

When and Where to Attack? Stage-wise Attention-Guided Adversarial Attack on Large Vision Language Models

Jaehyun Kwak, Nam Cao, Boryeong Cho, Segyu Lee, Sumyeong Ahn, Se-Young Yun
arXiv: 2602.04356v1 · Published: 2026-02-04 · Updated: 2026-02-04

AI Summary

SAGA is an attention-guided adversarial attack method that efficiently attacks Large Vision-Language Models.

Key Contributions

  • Observes that regional attention scores are positively correlated with adversarial loss sensitivity
  • Proposes the Stage-wise Attention-Guided Attack (SAGA) framework
  • Shows that SAGA uses a limited perturbation budget efficiently to generate high-quality adversarial examples

Methodology

Uses attention scores to guide where perturbations are applied, progressively concentrating the perturbation budget on high-attention regions stage by stage to improve attack efficiency.
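The staged, attention-masked update described above can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the function name, the fixed per-stage region size (`top_frac`), and the toy `grad_fn` interface are all assumptions for illustration, and a real attack would recompute attention and gradients from the target LVLM.

```python
import numpy as np

def stagewise_attention_guided_perturb(image, attention, grad_fn, eps=8 / 255,
                                       step=2 / 255, stages=3, top_frac=0.25,
                                       steps_per_stage=10):
    """Illustrative sketch of a stage-wise attention-guided attack:
    at each stage, restrict signed-gradient updates to the current
    most-attended region, then move on to the next salient region.
    `grad_fn(adv)` stands in for the gradient of the adversarial loss
    w.r.t. the input (hypothetical interface)."""
    adv = image.copy()
    h, w = attention.shape
    flat = attention.flatten()
    order = np.argsort(flat)[::-1]           # pixels sorted by descending attention
    k = max(1, int(top_frac * flat.size))    # region size handled per stage
    for s in range(stages):
        mask = np.zeros(flat.size)
        mask[order[s * k:(s + 1) * k]] = 1.0  # s-th most-attended region
        mask = mask.reshape(h, w)[..., None] if adv.ndim == 3 else mask.reshape(h, w)
        for _ in range(steps_per_stage):
            g = grad_fn(adv)                                   # adversarial-loss gradient
            adv = adv + step * np.sign(g) * mask               # update only inside region
            adv = np.clip(adv, image - eps, image + eps)       # L-inf budget constraint
            adv = np.clip(adv, 0.0, 1.0)                       # valid pixel range
    return adv
```

With a dummy all-ones gradient, the perturbation lands only on the top-attention pixels and saturates at the `eps` budget, which is the efficiency property the summary highlights.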

Original Abstract

Adversarial attacks against Large Vision-Language Models (LVLMs) are crucial for exposing safety vulnerabilities in modern multimodal systems. Recent attacks based on input transformations, such as random cropping, suggest that spatially localized perturbations can be more effective than global image manipulation. However, randomly cropping the entire image is inherently stochastic and fails to use the limited per-pixel perturbation budget efficiently. We make two key observations: (i) regional attention scores are positively correlated with adversarial loss sensitivity, and (ii) attacking high-attention regions induces a structured redistribution of attention toward subsequent salient regions. Based on these findings, we propose Stage-wise Attention-Guided Attack (SAGA), an attention-guided framework that progressively concentrates perturbations on high-attention regions. SAGA enables more efficient use of constrained perturbation budgets, producing highly imperceptible adversarial examples while consistently achieving state-of-the-art attack success rates across ten LVLMs. The source code is available at https://github.com/jackwaky/SAGA.

Tags

Adversarial Attack · Vision-Language Models · Attention Mechanism · Multimodal

arXiv Categories

cs.CV