Multimodal Learning Relevance: 9/10

DRMOT: A Dataset and Framework for RGBD Referring Multi-Object Tracking

Sijia Chen, Lijuan Ma, Yanqiu Yu, En Yu, Liman Liu, Wenbing Tao
arXiv: 2602.04692v1 Published: 2026-02-04 Updated: 2026-02-04

AI Summary

Proposes the RGBD Referring Multi-Object Tracking task, constructs the DRSet dataset, and introduces the DRTrack framework.

Key Contributions

  • Proposes the RGBD Referring Multi-Object Tracking (DRMOT) task
  • Constructs DRSet, a dataset tailored for DRMOT
  • Proposes DRTrack, an MLLM-guided depth-referring tracking framework

Methodology

Proposes the DRTrack framework, which uses an MLLM for depth-aware target grounding from joint RGB-D-L inputs and fuses depth cues into trajectory association.
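The paper does not spell out how depth enters the association step, but a common way to incorporate a depth cue into multi-object tracking is to blend a normalized depth gap into the track-detection cost matrix alongside a 2D overlap term. The sketch below is a hypothetical illustration of that general idea, not DRTrack's actual formulation; the function names, `depth_weight`, and `depth_scale` are all assumptions.

```python
import numpy as np

def iou(a, b):
    """Axis-aligned IoU between boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def association_cost(tracks, detections, depth_weight=0.3, depth_scale=1.0):
    """Hypothetical cost matrix mixing (1 - IoU) with a clamped depth gap.

    tracks, detections: lists of (box, depth_in_meters) pairs.
    depth_weight and depth_scale are illustrative hyperparameters,
    not values from the paper.
    """
    cost = np.zeros((len(tracks), len(detections)))
    for i, (tb, td) in enumerate(tracks):
        for j, (db, dd) in enumerate(detections):
            appearance = 1.0 - iou(tb, db)                    # 2D overlap term
            depth_gap = min(abs(td - dd) / depth_scale, 1.0)  # depth term in [0, 1]
            cost[i, j] = (1 - depth_weight) * appearance + depth_weight * depth_gap
    return cost
```

Under this scheme, two detections that overlap a track equally in 2D (e.g. an occluder passing in front of a target) are disambiguated by their depth: the one at the track's previous depth receives the lower cost, which is the failure mode the DRMOT task is designed to expose.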

Original Abstract

Referring Multi-Object Tracking (RMOT) aims to track specific targets based on language descriptions and is vital for interactive AI systems such as robotics and autonomous driving. However, existing RMOT models rely solely on 2D RGB data, making it challenging to accurately detect and associate targets characterized by complex spatial semantics (e.g., "the person closest to the camera") and to maintain reliable identities under severe occlusion, due to the absence of explicit 3D spatial information. In this work, we propose a novel task, RGBD Referring Multi-Object Tracking (DRMOT), which explicitly requires models to fuse RGB, Depth (D), and Language (L) modalities to achieve 3D-aware tracking. To advance research on the DRMOT task, we construct a tailored RGBD referring multi-object tracking dataset, named DRSet, designed to evaluate models' spatial-semantic grounding and tracking capabilities. Specifically, DRSet contains RGB images and depth maps from 187 scenes, along with 240 language descriptions, among which 56 descriptions incorporate depth-related information. Furthermore, we propose DRTrack, an MLLM-guided depth-referring tracking framework. DRTrack performs depth-aware target grounding from joint RGB-D-L inputs and enforces robust trajectory association by incorporating depth cues. Extensive experiments on the DRSet dataset demonstrate the effectiveness of our framework.

Tags

RGBD Referring Multi-Object Tracking · Multimodal Learning · Depth Perception

arXiv Categories

cs.CV cs.AI