Multimodal Learning Relevance: 9/10

X-AVDT: Audio-Visual Cross-Attention for Robust Deepfake Detection

Youngseo Kim, Kwan Yun, Seokhyeon Hong, Sihun Cha, Colette Suhjung Koo, Junyong Noh
arXiv: 2603.08483v1 Published: 2026-03-09 Updated: 2026-03-09

AI Summary

Proposes X-AVDT, which exploits generator-internal audio-visual consistency cues to improve the robustness and generalizability of deepfake detection.

Key Contributions

  • Proposes X-AVDT, a deepfake detector that leverages audio-visual cross-attention features
  • Introduces MMDF, a multimodal deepfake dataset covering forgeries from diverse generative models
  • Shows experimentally that X-AVDT outperforms existing methods in robustness and generalization

Methodology

Uses DDIM inversion to extract two complementary signals for deepfake detection: a video composite capturing inversion-induced inconsistencies, and an audio-visual cross-attention feature reflecting the modality alignment enforced during generation.
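The two-signal idea above can be sketched in a toy form. This is a minimal illustrative sketch, not the paper's implementation: all function names, shapes, and the linear fusion head are assumptions, and the "DDIM inversion" output is simply taken as a given reconstruction tensor.

```python
import numpy as np

# Hypothetical sketch of the two-branch cue extraction (illustrative only,
# not X-AVDT's actual architecture or API).

def inversion_residual(frames, reconstructed):
    """Video composite cue: per-frame discrepancy between the input and
    its DDIM inversion + reconstruction (the intuition being that fake
    videos leave larger residuals). Shapes assumed (T, H, W, C)."""
    return np.abs(frames - reconstructed).mean(axis=(1, 2, 3))  # (T,)

def attention_alignment(cross_attn):
    """Audio-visual cue: entropy of each frame's audio-to-visual
    cross-attention distribution; sharper (lower-entropy) maps suggest
    stronger enforced speech-motion correspondence. Shape (T, K)."""
    p = cross_attn / cross_attn.sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-9)).sum(axis=-1)  # (T,)

def fake_score(frames, reconstructed, cross_attn, w=(1.0, -0.5), b=0.0):
    """Fuse both cues with a toy linear head into a score in (0, 1).
    The weights here are arbitrary placeholders, not learned."""
    feat = np.stack([inversion_residual(frames, reconstructed),
                     attention_alignment(cross_attn)], axis=-1).mean(axis=0)
    logit = feat @ np.asarray(w) + b
    return 1.0 / (1.0 + np.exp(-logit))
```

In the paper these signals feed a learned classifier; the sketch only shows how the two cues could be computed and fused at the feature level.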

Original Abstract

The surge of highly realistic synthetic videos produced by contemporary generative systems has significantly increased the risk of malicious use, challenging both humans and existing detectors. Against this backdrop, we take a generator-side view and observe that internal cross-attention mechanisms in these models encode fine-grained speech-motion alignment, offering useful correspondence cues for forgery detection. Building on this insight, we propose X-AVDT, a robust and generalizable deepfake detector that probes generator-internal audio-visual signals accessed via DDIM inversion to expose these cues. X-AVDT extracts two complementary signals: (i) a video composite capturing inversion-induced discrepancies, and (ii) an audio-visual cross-attention feature reflecting modality alignment enforced during generation. To enable faithful cross-generator evaluation, we further introduce MMDF, a new multimodal deepfake dataset spanning diverse manipulation types and rapidly evolving synthesis paradigms, including GANs, diffusion, and flow-matching. Extensive experiments demonstrate that X-AVDT achieves leading performance on MMDF and generalizes strongly to external benchmarks and unseen generators, outperforming existing methods with accuracy improved by 13.1%. Our findings highlight the importance of leveraging internal audio-visual consistency cues for robustness to future generators in deepfake detection.

Tags

Deepfake Detection · Audio-Visual Learning · Cross-Attention · Generative Models · DDIM Inversion

arXiv Categories

cs.CV cs.AI cs.LG