Multimodal Learning Relevance: 9/10

Unleashing Vision-Language Semantics for Deepfake Video Detection

Jiawen Zhu, Yunqi Miao, Xueyi Zhang, Jiankang Deng, Guansong Pang
arXiv: 2603.24454v1 Published: 2026-03-25 Updated: 2026-03-25

AI Summary

VLAForge leverages vision-language semantics to strengthen the discriminative power of deepfake video detection, outperforming existing methods.

Key Contributions

  • Proposes the VLAForge framework, which fuses visual and language semantics
  • Designs ForgePerceiver, which enhances visual perception while preserving pretrained VLA knowledge
  • Introduces an Identity-Aware VLA score built on identity prior-informed text prompts

Methodology

ForgePerceiver captures forgery cues, which are then coupled with an identity-aware VLA score to strengthen the model's ability to discriminate deepfakes.
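As a rough illustration of how an identity-conditioned VLA score could work in a CLIP-style embedding space, the sketch below compares a visual embedding against "real" and "fake" identity-specific text-prompt embeddings via a softmax over cosine similarities. The function name, prompt pairing, and temperature are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identity_aware_vla_score(visual_emb, real_prompt_emb, fake_prompt_emb,
                             temperature=0.07):
    """Hypothetical VLA score: softmax over cosine similarities to
    identity-conditioned 'real'/'fake' text-prompt embeddings.
    Returns the probability mass assigned to the 'fake' prompt."""
    sims = np.array([cosine(visual_emb, real_prompt_emb),
                     cosine(visual_emb, fake_prompt_emb)]) / temperature
    exps = np.exp(sims - sims.max())  # numerically stable softmax
    return float(exps[1] / exps.sum())
```

In a real CLIP setup the prompt embeddings would come from the text encoder applied to identity-conditioned templates; here they are plain vectors to keep the sketch self-contained.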

Original Abstract

Recent Deepfake Video Detection (DFD) studies have demonstrated that pre-trained Vision-Language Models (VLMs) such as CLIP exhibit strong generalization capabilities in detecting artifacts across different identities. However, existing approaches focus on leveraging visual features only, overlooking their most distinctive strength -- the rich vision-language semantics embedded in the latent space. We propose VLAForge, a novel DFD framework that unleashes the potential of such cross-modal semantics to enhance the model's discriminability in deepfake detection. This work i) enhances the visual perception of VLM through a ForgePerceiver, which acts as an independent learner to capture diverse, subtle forgery cues both granularly and holistically, while preserving the pretrained Vision-Language Alignment (VLA) knowledge, and ii) provides a complementary discriminative cue -- Identity-Aware VLA score, derived by coupling cross-modal semantics with the forgery cues learned by ForgePerceiver. Notably, the VLA score is augmented by an identity prior-informed text prompting to capture authenticity cues tailored to each identity, thereby enabling more discriminative cross-modal semantics. Comprehensive experiments on video DFD benchmarks, including classical face-swapping forgeries and recent full-face generation forgeries, demonstrate that our VLAForge substantially outperforms state-of-the-art methods at both frame and video levels. Code is available at https://github.com/mala-lab/VLAForge.
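The abstract describes ForgePerceiver as an independent learner over frozen VLM features that captures forgery cues both granularly (per patch) and holistically (globally). A minimal sketch of that idea, assuming frozen patch features arrive as plain arrays and using a hypothetical class name and scoring rule:

```python
import numpy as np

class ForgePerceiverSketch:
    """Hypothetical 'independent learner' over frozen VLM patch features:
    a granular head scores each patch token, a holistic head scores the
    mean-pooled representation; the backbone itself is never updated."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w_patch = rng.standard_normal(dim) / np.sqrt(dim)   # granular head
        self.w_global = rng.standard_normal(dim) / np.sqrt(dim)  # holistic head

    def forgery_cue(self, patch_tokens):
        # patch_tokens: (num_patches, dim) features from a frozen VLM
        granular = patch_tokens @ self.w_patch             # per-patch scores
        holistic = patch_tokens.mean(axis=0) @ self.w_global
        # Combine the strongest local cue with the global cue
        return float(granular.max() + holistic)
```

Because only the two heads are trainable, the pretrained vision-language alignment of the backbone is left intact, which is the design property the abstract emphasizes.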

Tags

Deepfake Detection Vision-Language Models Cross-Modal Learning CLIP

arXiv Categories

cs.CV