Multimodal Learning · Relevance: 8/10

Beyond Semantic Priors: Mitigating Optimization Collapse for Generalizable Visual Forensics

Jipeng Liu, Haichao Shi, Siyu Xing, Rong Yin, Xiao-Yu Zhang
arXiv: 2603.24057v1 · Published: 2026-03-25 · Updated: 2026-03-25

AI Summary

To address the Optimization Collapse problem in deepfake detection, this work proposes the CoRIT model, which improves cross-domain generalization.

Key Contributions

  • Proposes the Critical Optimization Radius (COR) and the Gradient Signal-to-Noise Ratio (GSNR) for theoretical analysis
  • Identifies layer-wise GSNR attenuation as the root cause of Optimization Collapse
  • Proposes the Contrastive Regional Injection Transformer (CoRIT) model

Methodology

The paper theoretically analyzes the Optimization Collapse phenomenon and proposes the Contrastive Regional Injection Transformer (CoRIT), which improves generalization through strategies such as a Contrastive Gradient Proxy and Region Refinement.
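The collapse the paper studies is tied to the perturbation radius in Sharpness-Aware Minimization (SAM). A minimal 1-D sketch of the standard SAM update (ascend by rho along the normalized gradient, then descend using the gradient evaluated at that perturbed point) on a toy quadratic loss; the function names and constants here are illustrative, not from the paper:

```python
def grad(w):
    # gradient of the toy loss f(w) = 0.5 * w**2
    return w

def sam_step(w, rho, lr):
    # standard SAM: step to the (approximate) worst-case point within
    # radius rho, then descend using the gradient evaluated there
    g = grad(w)
    eps = rho if g >= 0 else -rho   # rho * g / |g| in one dimension
    g_sam = grad(w + eps)
    return w - lr * g_sam

w = 5.0
for _ in range(100):
    w = sam_step(w, rho=0.05, lr=0.1)
# with a small rho, the iterate settles near the minimum at w = 0
```

The paper's point is that shrinking rho like this stabilizes convergence but only treats the symptom; the underlying GSNR degradation remains.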

Original Abstract

While Vision-Language Models (VLMs) like CLIP have emerged as a dominant paradigm for generalizable deepfake detection, a representational disconnect remains: their semantic-centric pre-training is ill-suited for capturing non-semantic artifacts inherent to hyper-realistic synthesis. In this work, we identify a failure mode termed Optimization Collapse, where detectors trained with Sharpness-Aware Minimization (SAM) degenerate to random guessing on non-semantic forgeries once the perturbation radius exceeds a narrow threshold. To theoretically formalize this collapse, we propose the Critical Optimization Radius (COR) to quantify the geometric stability of the optimization landscape, and leverage the Gradient Signal-to-Noise Ratio (GSNR) to measure generalization potential. We establish a theorem proving that COR increases monotonically with GSNR, thereby revealing that the geometric instability of SAM optimization originates from degraded intrinsic generalization potential. This result identifies the layer-wise attenuation of GSNR as the root cause of Optimization Collapse in detecting non-semantic forgeries. Although naively reducing perturbation radius yields stable convergence under SAM, it merely treats the symptom without mitigating the intrinsic generalization degradation, necessitating enhanced gradient fidelity. Building on this insight, we propose the Contrastive Regional Injection Transformer (CoRIT), which integrates a computationally efficient Contrastive Gradient Proxy (CGP) with three training-free strategies: Region Refinement Mask to suppress CGP variance, Regional Signal Injection to preserve CGP magnitude, and Hierarchical Representation Integration to attain more generalizable representations. Extensive experiments demonstrate that CoRIT mitigates optimization collapse and achieves state-of-the-art generalization across cross-domain and universal forgery benchmarks.
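The abstract measures generalization potential with the Gradient Signal-to-Noise Ratio. Under its standard definition in the generalization literature, the GSNR of a parameter is the squared mean of its per-sample gradients divided by their variance; a minimal sketch with toy per-sample gradients (the layer names and numbers are hypothetical, chosen only to illustrate the layer-wise attenuation the paper describes):

```python
from statistics import mean, variance

def gsnr(per_sample_grads):
    # GSNR of one parameter: squared mean of its per-sample gradients
    # divided by their variance (higher = more consistent gradient signal)
    m = mean(per_sample_grads)
    return m * m / variance(per_sample_grads)

# toy per-sample gradients for two hypothetical layers
shallow = [1.0, 1.1, 0.9, 1.05]   # consistent signal -> high GSNR
deep    = [1.0, -1.2, 0.8, -0.9]  # noisy signal      -> low GSNR
```

In this toy setting `gsnr(shallow)` far exceeds `gsnr(deep)`, mirroring the layer-wise GSNR attenuation the paper identifies as the root cause of Optimization Collapse.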

Tags

Deepfake Detection · Generalization · Contrastive Learning · Vision Transformer

arXiv Categories

cs.CV