Multimodal Learning Relevance: 9/10

Visual Self-Fulfilling Alignment: Shaping Safety-Oriented Personas via Threat-Related Images

Qishun Yang, Shu Yang, Lijie Hu, Di Wang
arXiv: 2603.08486v1 Published: 2026-03-09 Updated: 2026-03-09

AI Summary

Proposes Visual Self-Fulfilling Alignment, which improves VLM safety by training on threat-related images, without requiring any safety labels.

Main Contributions

  • Proposes VSFA, a new safety alignment method for VLMs.
  • VSFA performs label-free training on threat-related images to improve VLM safety.
  • Experiments show that VSFA effectively reduces the attack success rate and improves response quality.

Methodology

Building on the Self-Fulfilling mechanism, VSFA constructs VQA tasks around threat-related images and fine-tunes the VLM on them, so that the model internalizes safety awareness.
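As an illustration of the data-construction step only (the paper does not publish implementation details here), the following is a minimal sketch of pairing threat-related images with neutral, descriptive VQA prompts that carry no explicit safety labels; the prompt templates, file paths, and caption source are assumptions, and the resulting samples would then be used for standard supervised fine-tuning of the VLM.

```python
# Minimal sketch (not the authors' released code): building label-free,
# neutral VQA samples around threat-related images, in the spirit of VSFA.
# Paths, prompt templates, and the caption source are illustrative assumptions.
import json
from pathlib import Path

# Neutral, descriptive questions -- no explicit safety labels or refusal targets.
NEUTRAL_QUESTIONS = [
    "What objects are visible in this image?",
    "Describe the scene shown in the image.",
    "What is happening in this picture?",
]

def build_vsfa_samples(image_dir: str, captions: dict) -> list:
    """Pair each threat-related image with neutral VQA prompts.

    `captions` maps image filenames to ordinary descriptive answers
    (e.g., from an off-the-shelf captioner); they contain no safety labels.
    """
    samples = []
    for image_path in sorted(Path(image_dir).glob("*.jpg")):
        answer = captions.get(image_path.name)
        if answer is None:
            continue
        for question in NEUTRAL_QUESTIONS:
            samples.append({
                "image": str(image_path),
                "question": question,
                "answer": answer,  # plain description; safety is meant to emerge implicitly
            })
    return samples

if __name__ == "__main__":
    # Hypothetical inputs: a folder of threat-related images and their captions.
    captions = {"knife_on_table.jpg": "A kitchen knife lying on a wooden table."}
    data = build_vsfa_samples("threat_images", captions)
    with open("vsfa_vqa.jsonl", "w", encoding="utf-8") as f:
        for row in data:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")
```

The JSONL produced this way would feed a standard VQA fine-tuning loop; per the abstract, repeated exposure to threat-related visual content, rather than any safety supervision in the targets, is what shapes the safety-oriented persona.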

Original Abstract

Multimodal large language models (MLLMs) face safety misalignment, where visual inputs enable harmful outputs. To address this, existing methods require explicit safety labels or contrastive data; yet, threat-related concepts are concrete and visually depictable, while safety concepts, like helpfulness, are abstract and lack visual referents. Inspired by the Self-Fulfilling mechanism underlying emergent misalignment, we propose Visual Self-Fulfilling Alignment (VSFA). VSFA fine-tunes vision-language models (VLMs) on neutral VQA tasks constructed around threat-related images, without any safety labels. Through repeated exposure to threat-related visual content, models internalize the implicit semantics of vigilance and caution, shaping safety-oriented personas. Experiments across multiple VLMs and safety benchmarks demonstrate that VSFA reduces the attack success rate, improves response quality, and mitigates over-refusal while preserving general capabilities. Our work extends the self-fulfilling mechanism from text to visual modalities, offering a label-free approach to VLM alignment.

Tags

VLM Safety Alignment, Threat-Related Images, Label-Free Learning

arXiv Categories

cs.CV cs.AI