Multimodal Learning (relevance: 9/10)

Exposing Cross-Modal Consistency for Fake News Detection in Short-Form Videos

Chong Tian, Yu Wang, Chenxu Yang, Junyi Guan, Zheng Lin, Yuhan Liu, Xiuying Chen, Qirong Ho
arXiv: 2603.14992v1 Published: 2026-03-16 Updated: 2026-03-16

AI Summary

Proposes the MAGIC3 model, which detects fake news in short-form videos by explicitly modeling cross-modal consistency.

Key Contributions

  • Proposes the MAGIC3 model, which explicitly models consistency across all three modalities
  • Uses multi-style LLM rewrites to obtain style-robust text representations
  • Employs an uncertainty-aware classifier for selective VLM routing

Methodology

MAGIC3 combines explicit pairwise and global consistency modeling with token- and frame-level consistency signals for fake news detection.
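The pairwise-plus-global consistency idea can be illustrated with a minimal sketch: score each modality pair by cosine similarity of its embeddings, then collapse the pairs into one global score. This is an illustrative simplification, not the paper's implementation (MAGIC3 derives finer-grained signals from cross-modal attention); the function names and the mean aggregation are assumptions.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def consistency_scores(text_emb, visual_emb, audio_emb):
    """Pairwise consistency between the three modality embeddings,
    plus a single global score (here: their mean) that acts as one
    interpretable axis for fake-news likelihood."""
    pairs = {
        "text-visual": cosine(text_emb, visual_emb),
        "text-audio": cosine(text_emb, audio_emb),
        "visual-audio": cosine(visual_emb, audio_emb),
    }
    pairs["global"] = sum(pairs.values()) / 3
    return pairs
```

Under this toy view, the asymmetry the paper reports would show up as real videos scoring high on "text-visual" but only moderate on "text-audio", and fake videos showing the reverse.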

Original Abstract

Short-form video platforms are major channels for news but also fertile ground for multimodal misinformation where each modality appears plausible alone yet cross-modal relationships are subtly inconsistent, like mismatched visuals and captions. On two benchmark datasets, FakeSV (Chinese) and FakeTT (English), we observe a clear asymmetry: real videos exhibit high text-visual but moderate text-audio consistency, while fake videos show the opposite pattern. Moreover, a single global consistency score forms an interpretable axis along which fake probability and prediction errors vary smoothly. Motivated by these observations, we present MAGIC3 (Modal-Adversarial Gated Interaction and Consistency-Centric Classifier), a detector that explicitly models and exposes cross-tri-modal consistency signals at multiple granularities. MAGIC3 combines explicit pairwise and global consistency modeling with token- and frame-level consistency signals derived from cross-modal attention, incorporates multi-style LLM rewrites to obtain style-robust text representations, and employs an uncertainty-aware classifier for selective VLM routing. Using pre-extracted features, MAGIC3 consistently outperforms the strongest non-VLM baselines on FakeSV and FakeTT. While matching VLM-level accuracy, the two-stage system achieves 18-27x higher throughput and 93% VRAM savings, offering a strong cost-performance tradeoff.
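The "selective VLM routing" mentioned in the abstract can be sketched as a confidence gate: the cheap classifier handles confident cases, and only predictions near the decision boundary are escalated to the expensive VLM. This is a minimal illustration of the two-stage idea; the threshold values and the use of raw probability as the uncertainty measure are assumptions, not the paper's settings.

```python
import numpy as np

def route(probs, low=0.35, high=0.65):
    """Selective routing: keep confident stage-1 predictions, flag
    uncertain ones (probability near 0.5) for VLM escalation.
    Returns (labels, escalate_mask)."""
    probs = np.asarray(probs, dtype=float)
    escalate = (probs > low) & (probs < high)  # uncertain band
    labels = (probs >= 0.5).astype(int)        # provisional labels
    return labels, escalate
```

Because most samples fall outside the uncertain band, the heavy VLM runs on only a small fraction of the stream, which is how a two-stage system can approach VLM accuracy while claiming large throughput and VRAM savings.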

Tags

Fake News Detection, Multimodal Learning, Cross-Modal Consistency, Short-Form Video

arXiv Categories

cs.AI cs.MM