Multimodal Learning (Relevance: 9/10)

OOD-MMSafe: Advancing MLLM Safety from Harmful Intent to Hidden Consequences

Ming Wen, Kun Yang, Jingyu Zhang, Yuxuan Liu, Shiwen Cui, Shouling Ji, Xingjun Ma
arXiv: 2603.09706v1 Published: 2026-03-10 Updated: 2026-03-10

AI Summary

Introduces OOD-MMSafe, a benchmark that tests MLLMs' ability to identify latent risks within causal chains, and proposes the CASPO framework to improve model safety.

Key Contributions

  • Introduced the OOD-MMSafe benchmark
  • Revealed a pervasive "causal blindness" in MLLMs when identifying latent risks
  • Proposed the CASPO framework, which uses self-distillation to improve safety

Methodology

Constructs the OOD-MMSafe dataset to evaluate models' ability to identify latent risks, and applies the CASPO framework to optimize the model's safety policy via token-level self-distillation rewards.
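The paper does not spell out the reward formula in this summary, but a token-level self-distillation reward that uses the model's own reasoning as a dynamic reference can be sketched as a per-token KL-style term, in the spirit of RLHF-style objectives. This is a minimal illustration, not CASPO's actual implementation; the function name, the `alpha` coefficient, and the log-probability inputs are all assumptions.

```python
import math

def token_self_distill_rewards(policy_logprobs, ref_logprobs, alpha=0.1):
    """Hypothetical per-token self-distillation reward.

    Each generated token receives a reward proportional to how much more
    likely it is under a frozen reference distribution (here standing in
    for the model's own intrinsic reasoning trace) than under the current
    policy, pulling the policy back toward that reference:

        reward_t = alpha * (log p_ref(y_t) - log p_policy(y_t))

    which is a per-token KL-style penalty with sign flipped into a reward.
    """
    return [alpha * (r - p) for p, r in zip(policy_logprobs, ref_logprobs)]

# Toy example: two generated tokens with made-up log-probabilities.
rewards = token_self_distill_rewards(
    policy_logprobs=[-1.0, -2.0],
    ref_logprobs=[-0.5, -2.5],
)
# First token is more likely under the reference (positive reward);
# second token is less likely (negative reward).
```

In an actual policy-optimization loop, such per-token rewards would be added to a task-level safety reward before computing the policy gradient, so the penalty shapes the reasoning tokens rather than only the final answer.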

Original Abstract

While safety alignment for Multimodal Large Language Models (MLLMs) has gained significant attention, current paradigms primarily target malicious intent or situational violations. We propose shifting the safety frontier toward consequence-driven safety, a paradigm essential for the robust deployment of autonomous and embodied agents. To formalize this shift, we introduce OOD-MMSafe, a benchmark comprising 455 curated query-image pairs designed to evaluate a model's ability to identify latent hazards within context-dependent causal chains. Our analysis reveals a pervasive causal blindness among frontier models, with the highest 67.5% failure rate in high-capacity closed-source models, and identifies a preference ceiling where static alignment yields format-centric failures rather than improved safety reasoning as model capacity grows. To address these bottlenecks, we develop the Consequence-Aware Safety Policy Optimization (CASPO) framework, which integrates the model's intrinsic reasoning as a dynamic reference for token-level self-distillation rewards. Experimental results demonstrate that CASPO significantly enhances consequence projection, reducing the failure ratio of risk identification to 7.3% for Qwen2.5-VL-7B and 5.7% for Qwen3-VL-4B while maintaining overall effectiveness.

Tags

MLLM Safety · Causal Reasoning · Benchmark · Self-Distillation

arXiv Category

cs.AI