Multimodal Learning Relevance: 9/10

GridVAD: Open-Set Video Anomaly Detection via Spatial Reasoning over Stratified Frame Grids

Mohamed Eltahir, Ahmed O. Ibrahim, Obada Siralkhatim, Tabarak Abdallah, Sondos Mohamed
arXiv: 2603.25467v1 Published: 2026-03-26 Updated: 2026-03-26

AI Summary

GridVAD proposes a training-free video anomaly detection method built on vision-language models, using spatial reasoning to generate pixel-level anomaly masks.

Main Contributions

  • Proposes the GridVAD framework, a training-free video anomaly detection pipeline
  • Uses a vision-language model to generate open-set anomaly proposals
  • Filters hallucinations via Self-Consistency Consolidation (SCC)
  • Achieves leading performance across multiple datasets

Methodology

GridVAD performs VLM reasoning over stratified grid representations of video clips to generate anomaly proposals, filters hallucinations via SCC, and finally produces pixel-level anomaly masks with Grounding DINO and SAM2.
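The SCC step above keeps only proposals that recur across multiple independent VLM samplings. A minimal sketch of that recurrence filter, assuming proposals arrive as free-form strings and using exact (normalized) string matching as a stand-in for whatever matching the paper actually uses; `consolidate_proposals` and `min_support` are hypothetical names:

```python
# Hypothetical sketch of Self-Consistency Consolidation (SCC):
# keep proposals that appear in at least `min_support` of the
# independent VLM samplings for the same clip.
from collections import Counter

def consolidate_proposals(samplings, min_support=2):
    """samplings: one list of natural-language proposals per VLM call."""
    counts = Counter()
    for proposals in samplings:
        # deduplicate within a single sampling before counting
        counts.update({p.strip().lower() for p in proposals})
    return sorted(p for p, c in counts.items() if c >= min_support)

samplings = [
    ["person riding a bicycle", "car on walkway"],
    ["person riding a bicycle"],
    ["person riding a bicycle", "car on walkway", "shadow on ground"],
]
print(consolidate_proposals(samplings))
# "shadow on ground" appears only once and is dropped as a likely hallucination
```

In practice the surviving proposals would then be handed to Grounding DINO for box anchoring and SAM2 for mask propagation.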

Original Abstract

Vision-Language Models (VLMs) are powerful open-set reasoners, yet their direct use as anomaly detectors in video surveillance is fragile: without calibrated anomaly priors, they alternate between missed detections and hallucinated false alarms. We argue the problem is not the VLM itself but how it is used. VLMs should function as anomaly proposers, generating open-set candidate descriptions that are then grounded and tracked by purpose-built spatial and temporal modules. We instantiate this propose-ground-propagate principle in GridVAD, a training-free pipeline that produces pixel-level anomaly masks without any domain-specific training. A VLM reasons over stratified grid representations of video clips to generate natural-language anomaly proposals. Self-Consistency Consolidation (SCC) filters hallucinations by retaining only proposals that recur across multiple independent samplings. Grounding DINO anchors each surviving proposal to a bounding box, and SAM2 propagates it as a dense mask through the anomaly interval. The per-clip VLM budget is fixed at M+1 calls regardless of video length, where M can be set according to the proposals needed. On UCSD Ped2, GridVAD achieves the highest Pixel-AUROC (77.59) among all compared methods, surpassing even the partially fine-tuned TAO (75.11) and outperforming other zero-shot approaches on object-level RBDC by over 5x. Ablations reveal that SCC provides a controllable precision-recall tradeoff: filtering improves all pixel-level metrics at a modest cost in object-level recall. Efficiency experiments show GridVAD is 2.7x more call-efficient than uniform per-frame VLM querying while additionally producing dense segmentation masks. Code and qualitative video results are available at https://gridvad.github.io.
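The fixed M+1 per-clip budget mentioned in the abstract can be contrasted with uniform per-frame querying in a small back-of-the-envelope sketch. The function names and the 300-frame / 10-clip example are illustrative assumptions, not figures from the paper:

```python
# Illustrative VLM call-budget accounting (assumed helper names and
# example numbers; the paper only states the M+1 per-clip budget).
def gridvad_calls(num_clips, M):
    # one consolidated reasoning call plus M independent samplings per clip
    return num_clips * (M + 1)

def per_frame_calls(num_frames):
    # baseline: one VLM query per frame
    return num_frames

# e.g. a 300-frame video split into 10 clips, with M = 4 samplings
print(gridvad_calls(10, 4), "vs", per_frame_calls(300))  # → 50 vs 300
```

Because the budget depends only on the number of clips and M, longer clips do not increase the per-clip VLM cost, which is where the reported call-efficiency gain over per-frame querying comes from.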

Tags

Video Anomaly Detection, Vision-Language Models, Zero-Shot Learning, Spatial Reasoning

arXiv Categories

cs.CV