EA-Swin: An Embedding-Agnostic Swin Transformer for AI-Generated Video Detection
AI Summary
EA-Swin uses an embedding-agnostic Swin Transformer to detect AI-generated videos effectively, and introduces the new EA-Video dataset.
Main Contributions
- Proposes EA-Swin, an embedding-agnostic Swin Transformer
- Constructs the EA-Video dataset, comprising diverse AI-generated videos
- EA-Swin significantly outperforms existing methods at detecting AI-generated videos
Methodology
An embedding-agnostic Swin Transformer models spatiotemporal dependencies on pretrained video embeddings through a factorized windowed attention mechanism, making it compatible with generic ViT-style encoders.
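The factorization above can be sketched in a minimal form: spatial attention within local windows of patches inside each frame, followed by temporal attention across frames at each patch position. This is a conceptual illustration only, not the paper's implementation; the function name, window size, and the use of identity query/key/value projections are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x):
    # Single-head self-attention over the second-to-last axis.
    # Identity Q/K/V projections for brevity (a real block learns these).
    scores = x @ x.swapaxes(-1, -2) / np.sqrt(x.shape[-1])
    return softmax(scores) @ x

def factorized_windowed_attention(emb, window=4):
    # emb: (T, N, D) patch embeddings from a frozen ViT-style encoder
    # (T frames, N patches per frame, D channels); `window` is hypothetical.
    T, N, D = emb.shape
    assert N % window == 0, "patch count must be divisible by window size"
    # Spatial step: attend only within windows of `window` patches per frame.
    x = emb.reshape(T, N // window, window, D)
    x = attention(x).reshape(T, N, D)
    # Temporal step: attend across the T frames at each patch position.
    x = x.transpose(1, 0, 2)        # (N, T, D)
    x = attention(x)
    return x.transpose(1, 0, 2)     # back to (T, N, D)

out = factorized_windowed_attention(np.random.rand(8, 16, 32))
```

Factorizing spatial and temporal attention this way keeps the cost linear in the number of windows rather than quadratic in the full T x N token sequence, which is what makes it tractable on top of generic patch-based encoders.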
Original Abstract
Recent advances in foundation video generators such as Sora2, Veo3, and other commercial systems have produced highly realistic synthetic videos, exposing the limitations of existing detection methods that rely on shallow embedding trajectories, image-based adaptation, or computationally heavy MLLMs. We propose EA-Swin, an Embedding-Agnostic Swin Transformer that models spatiotemporal dependencies directly on pretrained video embeddings via a factorized windowed attention design, making it compatible with generic ViT-style patch-based encoders. Alongside the model, we construct the EA-Video dataset, a benchmark dataset comprising 130K videos that integrates newly collected samples with curated existing datasets, covering diverse commercial and open-source generators and including unseen-generator splits for rigorous cross-distribution evaluation. Extensive experiments show that EA-Swin achieves 0.97-0.99 accuracy across major generators, outperforming prior SoTA methods (typically 0.8-0.9) by a margin of 5-20%, while maintaining strong generalization to unseen distributions, establishing a scalable and robust solution for modern AI-generated video detection.