Multimodal Learning Relevance: 9/10

UniSAFE: A Comprehensive Benchmark for Safety Evaluation of Unified Multimodal Models

Segyu Lee, Boryeong Cho, Hojung Jung, Seokhyun An, Juhyeong Kim, Jaehyun Kwak, Yongjin Yang, Sangwon Jang, Youngrok Park, Wonjun Chang, Se-Young Yun
arXiv: 2603.17476v1 Published: 2026-03-18 Updated: 2026-03-18

AI Summary

UniSAFE is a comprehensive benchmark for evaluating the safety of unified multimodal models; it reveals safety vulnerabilities of existing models in multimodal contexts.

Key Contributions

  • Proposes UniSAFE, the first system-level safety benchmark targeting unified multimodal models
  • Constructs a test dataset of 6,802 instances covering 7 modality combinations
  • Evaluates 15 state-of-the-art unified multimodal models and uncovers safety vulnerabilities

Methodology

Using a shared-target design, common risk scenarios are projected onto different I/O configurations, enabling controlled cross-task comparisons of safety failures.
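The shared-target projection can be sketched as follows. This is a minimal illustrative example, not code from the UniSAFE repository: the configuration names, the `Instance` class, and the `project` helper are all assumptions made for clarity, and only the counts (7 I/O combinations, cross-task pairing on a common target) follow the paper's description.

```python
from dataclasses import dataclass
from itertools import product

# Illustrative modality combinations; the paper covers 7 I/O pairings,
# but these exact labels are assumptions, not the benchmark's taxonomy.
IO_CONFIGS = [
    ("text", "text"),
    ("image", "text"),
    ("text", "image"),
    ("image", "image"),
    ("text+image", "text"),
    ("text+image", "image"),
    ("multi-image", "image"),
]

@dataclass(frozen=True)
class Instance:
    risk_target: str       # shared risk scenario identifier
    input_modality: str
    output_modality: str

def project(risk_targets):
    """Project each shared risk target onto every I/O configuration,
    so the same risk is tested under each task-specific setting."""
    return [
        Instance(target, inp, out)
        for target, (inp, out) in product(risk_targets, IO_CONFIGS)
    ]

instances = project(["illustrative-risk-A", "illustrative-risk-B"])
# Each risk target yields one instance per I/O configuration, which is
# what enables controlled cross-task comparison on the same target.
print(len(instances))  # 2 targets x 7 configs = 14
```

Because every instance carries the same `risk_target` across configurations, differences in model behavior can be attributed to the I/O setting rather than to the risk content itself.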

Original Abstract

Unified Multimodal Models (UMMs) offer powerful cross-modality capabilities but introduce new safety risks not observed in single-task models. Despite their emergence, existing safety benchmarks remain fragmented across tasks and modalities, limiting the comprehensive evaluation of complex system-level vulnerabilities. To address this gap, we introduce UniSAFE, the first comprehensive benchmark for system-level safety evaluation of UMMs across 7 I/O modality combinations, spanning conventional tasks and novel multimodal-context image generation settings. UniSAFE is built with a shared-target design that projects common risk scenarios across task-specific I/O configurations, enabling controlled cross-task comparisons of safety failures. Comprising 6,802 curated instances, we use UniSAFE to evaluate 15 state-of-the-art UMMs, both proprietary and open-source. Our results reveal critical vulnerabilities across current UMMs, including elevated safety violations in multi-image composition and multi-turn settings, with image-output tasks consistently more vulnerable than text-output tasks. These findings highlight the need for stronger system-level safety alignment for UMMs. Our code and data are publicly available at https://github.com/segyulee/UniSAFE

Tags

Multimodal Learning, Safety Evaluation, Unified Models, Benchmarking

arXiv Categories

cs.CV cs.AI cs.CL