Multimodal Learning · Relevance: 9/10

Pushing the Frontier of Black-Box LVLM Attacks via Fine-Grained Detail Targeting

Xiaohan Zhao, Zhaoyi Li, Yaxin Luo, Jiacheng Cui, Zhiqiang Shen
arXiv: 2602.17645v1 · Published: 2026-02-19 · Updated: 2026-02-19

AI Summary

This paper presents M-Attack-V2, which substantially improves the success rate of black-box adversarial attacks on LVLMs through fine-grained detail targeting.

Key Contributions

  • Proposes Multi-Crop Alignment (MCA) to reduce gradient variance
  • Proposes Auxiliary Target Alignment (ATA) to construct a smoother target manifold
  • Proposes Patch Momentum and a patch-size ensemble to strengthen transferable directions

Methodology

The method refines the local matching strategy: MCA averages over multiple source crops to reduce the variance of source-image gradients, ATA constructs a smoother target manifold from a small auxiliary target set, and Patch Momentum replays historical crop gradients; together these improve the transferability of black-box adversarial attacks.
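The variance-reduction idea behind MCA can be illustrated with a toy sketch. This is not the paper's implementation: the crop loss below is a hypothetical squared-distance match to a fixed target image, standing in for the actual crop-level feature alignment; the function names (`crop_gradient`, `mca_gradient`) and all parameters are illustrative assumptions.

```python
import numpy as np

def crop_gradient(image, target, crop_size, rng):
    """Gradient of a toy crop-matching loss for one random local crop.

    Stand-in for M-Attack's crop-level alignment gradient: take a random
    crop, compute d/dx ||crop - target_crop||^2, and embed it back into
    full-image coordinates (zero elsewhere).
    """
    h, w = image.shape
    y = int(rng.integers(0, h - crop_size + 1))
    x = int(rng.integers(0, w - crop_size + 1))
    sl = (slice(y, y + crop_size), slice(x, x + crop_size))
    g = np.zeros_like(image)
    g[sl] = 2.0 * (image[sl] - target[sl])
    return g

def mca_gradient(image, target, crop_size, n_views, rng):
    """Multi-Crop Alignment sketch: average gradients over several
    independently sampled local views in a single iteration."""
    grads = [crop_gradient(image, target, crop_size, rng)
             for _ in range(n_views)]
    return np.mean(grads, axis=0)

# Empirical variance check: averaging n_views independent crop gradients
# should shrink the per-pixel gradient variance across iterations.
rng = np.random.default_rng(0)
image = rng.normal(size=(32, 32))
target = rng.normal(size=(32, 32))

single = np.stack([crop_gradient(image, target, 16, rng) for _ in range(200)])
multi = np.stack([mca_gradient(image, target, 16, 8, rng) for _ in range(200)])
print(single.var(axis=0).mean() > multi.var(axis=0).mean())  # averaging lowers variance
```

Averaging K independent crop gradients divides their variance by roughly K, which is why the multi-crop estimate gives a more stable update direction than a single spike-like crop gradient.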

Original Abstract

Black-box adversarial attacks on Large Vision-Language Models (LVLMs) are challenging due to missing gradients and complex multimodal boundaries. While prior state-of-the-art transfer-based approaches like M-Attack perform well using local crop-level matching between source and target images, we find this induces high-variance, nearly orthogonal gradients across iterations, violating coherent local alignment and destabilizing optimization. We attribute this to (i) ViT translation sensitivity that yields spike-like gradients and (ii) structural asymmetry between source and target crops. We reformulate local matching as an asymmetric expectation over source transformations and target semantics, and build a gradient-denoising upgrade to M-Attack. On the source side, Multi-Crop Alignment (MCA) averages gradients from multiple independently sampled local views per iteration to reduce variance. On the target side, Auxiliary Target Alignment (ATA) replaces aggressive target augmentation with a small auxiliary set from a semantically correlated distribution, producing a smoother, lower-variance target manifold. We further reinterpret momentum as Patch Momentum, replaying historical crop gradients; combined with a refined patch-size ensemble (PE+), this strengthens transferable directions. Together these modules form M-Attack-V2, a simple, modular enhancement over M-Attack that substantially improves transfer-based black-box attacks on frontier LVLMs: boosting success rates on Claude-4.0 from 8% to 30%, Gemini-2.5-Pro from 83% to 97%, and GPT-5 from 98% to 100%, outperforming prior black-box LVLM attacks. Code and data are publicly available at: https://github.com/vila-lab/M-Attack-V2.

Tags

LVLM · Adversarial Attacks · Black-Box Attacks · Transfer Learning

arXiv Categories

cs.LG cs.AI cs.CL cs.CV