Mining Instance-Centric Vision-Language Contexts for Human-Object Interaction Detection
AI Summary
Proposes InCoM-Net, which combines vision-language models with an object detector to improve human-object interaction detection performance.
Main Contributions
- Proposes the Instance-centric Context Mining Network (InCoM-Net)
- Designs the Instance-centric Context Refinement (ICR) module
- Designs the Progressive Context Aggregation (ProCA) module
Methodology
InCoM-Net extracts semantic knowledge with VLMs and combines it with object-detector features: ICR mines multi-granularity contextual cues (intra-instance, inter-instance, and global), ProCA fuses them with instance-level features, and the fused representation drives the final HOI prediction.
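The ICR/ProCA pipeline described above can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: the function `cross_attention`, the single-head attention, the residual fusion loop, and all shapes are simplifying assumptions (the actual modules use learned projections and iterate over transformer layers).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, keys):
    # Single-head scaled dot-product attention: each query row
    # attends over all key rows (a stand-in for the learned
    # attention used in ICR/ProCA).
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)      # (Nq, Nk)
    return softmax(scores, axis=-1) @ keys    # (Nq, d)

# Toy inputs (hypothetical sizes): 3 detected instances, 10 VLM patches.
d = 16
inst = rng.standard_normal((3, d))           # detector instance features
vlm_patches = rng.standard_normal((10, d))   # VLM-derived patch features

# ICR (sketch): three kinds of contextual cues per instance.
intra = cross_attention(inst, vlm_patches)               # intra-instance context
inter = cross_attention(inst, inst)                      # inter-instance context
global_ctx = np.tile(vlm_patches.mean(axis=0), (3, 1))   # global scene context

# ProCA (sketch): progressively fuse each context into the
# instance features with a residual update.
fused = inst
for ctx in (intra, inter, global_ctx):
    fused = fused + cross_attention(fused, ctx)

print(fused.shape)  # (3, 16)
```

The fused per-instance features would then feed an interaction classifier; the progressive (one context at a time) fusion mirrors ProCA's iterative design.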
Original Abstract
Human-Object Interaction (HOI) detection aims to localize human-object pairs and classify their interactions from a single image, a task that demands strong visual understanding and nuanced contextual reasoning. Recent approaches have leveraged Vision-Language Models (VLMs) to introduce semantic priors, significantly improving HOI detection performance. However, existing methods often fail to fully capitalize on the diverse contextual cues distributed across the entire scene. To overcome these limitations, we propose the Instance-centric Context Mining Network (InCoM-Net), a novel framework that effectively integrates rich semantic knowledge extracted from VLMs with instance-specific features produced by an object detector. This design enables deeper interaction reasoning by modeling relationships not only within each detected instance but also across instances and their surrounding scene context. InCoM-Net comprises two core components: Instance-centric Context Refinement (ICR), which separately extracts intra-instance, inter-instance, and global contextual cues from VLM-derived features, and Progressive Context Aggregation (ProCA), which iteratively fuses these multi-context features with instance-level detector features to support high-level HOI reasoning. Extensive experiments on the HICO-DET and V-COCO benchmarks show that InCoM-Net achieves state-of-the-art performance, surpassing previous HOI detection methods. Code is available at https://github.com/nowuss/InCoM-Net.