Directional Embedding Smoothing for Robust Vision Language Models
AI Summary
This paper extends the RESTA defense with directional embedding smoothing, strengthening the robustness of vision-language models against jailbreaking attacks.
Main Contributions
- Extends the RESTA defense to VLMs
- Proposes directional embedding noise, improving defense effectiveness
- Validates RESTA's effectiveness on the JailBreakV-28K benchmark
Methodology
Directional noise aligned with the original token embedding vectors is injected at the embedding layer, smoothing the embedding space and thereby reducing the attack success rate.
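The core idea of directional noise can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the noise is a per-token Gaussian scalar applied along each embedding's own unit direction, so the perturbation rescales the embedding without rotating it. The function name and `sigma` parameter are hypothetical.

```python
import numpy as np

def directional_smooth(embeddings, sigma=0.1, rng=None):
    """Perturb each token embedding along its own direction.

    Hypothetical sketch: for each token embedding e, draw a scalar
    s ~ N(0, sigma^2) and return e + s * (e / ||e||), so the noisy
    embedding stays parallel to the original.
    """
    rng = rng or np.random.default_rng()
    # Unit vectors of the original embeddings (guard against zero norm).
    norms = np.linalg.norm(embeddings, axis=-1, keepdims=True)
    units = embeddings / np.clip(norms, 1e-12, None)
    # One scalar noise magnitude per token, broadcast over dimensions.
    scales = rng.normal(0.0, sigma, size=(embeddings.shape[0], 1))
    return embeddings + scales * units
```

In a RESTA-style pipeline, the model would be run over several independently smoothed copies of the input and the resulting token predictions aggregated (for example by majority vote) to produce the final output.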
Original Abstract
The safety and reliability of vision-language models (VLMs) are a crucial part of deploying trustworthy agentic AI systems. However, VLMs remain vulnerable to jailbreaking attacks that undermine their safety alignment to yield harmful outputs. In this work, we extend the Randomized Embedding Smoothing and Token Aggregation (RESTA) defense to VLMs and evaluate its performance against the JailBreakV-28K benchmark of multi-modal jailbreaking attacks. We find that RESTA is effective in reducing attack success rate over this diverse corpus of attacks, in particular, when employing directional embedding noise, where the injected noise is aligned with the original token embedding vectors. Our results demonstrate that RESTA can contribute to securing VLMs within agentic systems, as a lightweight, inference-time defense layer of an overall security framework.