Multimodal Learning relevance: 8/10

VideoAfford: Grounding 3D Affordance from Human-Object-Interaction Videos via Multimodal Large Language Model

Hanqing Wang, Mingyu Liu, Xiaoyu Chen, Chengwei MA, Yiming Zhong, Wenti Yin, Yuhao Liu, Zhiqing Cui, Jiahao Yuan, Lu Dai, Zhiyuan Ma, Hui Xiong
arXiv: 2602.09638v1 Published: 2026-02-10 Updated: 2026-02-10

AI Summary

This paper proposes VideoAfford, which uses a multimodal large language model to learn and reason about 3D affordances from videos.

Main Contributions

  • Constructed VIDA, a video-based 3D interaction affordance dataset
  • Proposed VideoAfford, a model built on a multimodal large language model
  • Introduced a spatial-aware loss function (a hedged sketch follows this list)
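
The form of the spatial-aware loss is not specified in this digest, so the sketch below is purely illustrative: it assumes a per-point binary cross-entropy whose weight grows for points lying close to the ground-truth affordance region, which is one simple way to inject 3D spatial structure into the supervision. The function name, weighting scheme, and `sigma` parameter are assumptions, not the paper's definition.

```python
# Hedged sketch of a "spatial-aware" loss (illustrative only; the paper's
# actual formulation is not given in this digest). A per-point BCE is weighted
# by each point's distance to the nearest ground-truth affordance point, so
# errors near the affordance region are penalized more heavily.
import torch
import torch.nn.functional as F


def spatial_aware_loss(logits, labels, points, sigma: float = 0.1):
    """logits: (B, N) per-point affordance logits
    labels: (B, N) binary ground-truth affordance mask
    points: (B, N, 3) point-cloud coordinates"""
    pos_mask = labels.bool()                                  # (B, N)
    # Distance from every point to its nearest ground-truth affordance point.
    dists = torch.cdist(points, points)                       # (B, N, N)
    dists = dists.masked_fill(~pos_mask.unsqueeze(1), float("inf"))
    nearest = dists.min(dim=-1).values                        # (B, N)
    nearest = torch.where(torch.isinf(nearest), torch.zeros_like(nearest), nearest)

    # Spatially aware weights in (1, 2]: larger near the affordance region.
    weights = torch.exp(-nearest / sigma) + 1.0

    bce = F.binary_cross_entropy_with_logits(logits, labels.float(), reduction="none")
    return (weights * bce).mean()


# Usage with random tensors standing in for predictions and ground truth.
B, N = 2, 1024
loss = spatial_aware_loss(
    torch.randn(B, N), (torch.rand(B, N) > 0.8).float(), torch.randn(B, N, 3)
)
print(loss.item())
```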

Methodology

A latent action encoder extracts dynamic interaction priors from the videos; these are combined with a multimodal large language model to perform 3D affordance segmentation and reasoning.
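
The pipeline is only described at a high level here, so the following is a minimal sketch of one way such a video-to-affordance pipeline could be wired together. All module names, tensor shapes, and the toy encoders below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the described pipeline (not the authors' code). Module
# names, tensor shapes, and the simple encoders are illustrative assumptions.
import torch
import torch.nn as nn


class VideoAffordSketch(nn.Module):
    """Video clip + object point cloud -> per-point affordance logits."""

    def __init__(self, dim: int = 256, frame_pixels: int = 3 * 64 * 64):
        super().__init__()
        # Latent action encoder: compresses HOI video frames into tokens that
        # carry dynamic interaction priors.
        self.action_encoder = nn.Sequential(
            nn.Linear(frame_pixels, dim), nn.GELU(), nn.Linear(dim, dim)
        )
        # Stand-in for the multimodal LLM backbone (a small transformer here).
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Point-cloud tokenizer and per-point affordance segmentation head.
        self.point_proj = nn.Linear(3, dim)
        self.seg_head = nn.Linear(dim, 1)

    def forward(self, frames: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, frame_pixels) flattened, downsampled video frames
        # points: (B, N, 3) object point cloud
        action_tokens = self.action_encoder(frames)            # (B, T, dim)
        point_tokens = self.point_proj(points)                  # (B, N, dim)
        fused = self.backbone(torch.cat([action_tokens, point_tokens], dim=1))
        point_feats = fused[:, action_tokens.shape[1]:, :]      # (B, N, dim)
        return self.seg_head(point_feats).squeeze(-1)           # (B, N) logits


# Usage with random tensors standing in for a real HOI clip and point cloud.
model = VideoAffordSketch()
logits = model(torch.randn(2, 8, 3 * 64 * 64), torch.randn(2, 2048, 3))
print(logits.shape)  # torch.Size([2, 2048])
```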

Original Abstract

3D affordance grounding aims to highlight the actionable regions on 3D objects, which is crucial for robotic manipulation. Previous research primarily focused on learning affordance knowledge from static cues such as language and images, which struggle to provide sufficient dynamic interaction context that can reveal temporal and causal cues. To alleviate this predicament, we collect a comprehensive video-based 3D affordance dataset, VIDA, which contains 38K human-object-interaction videos covering 16 affordance types, 38 object categories, and 22K point clouds. Based on VIDA, we propose a strong baseline: VideoAfford, which activates multimodal large language models with additional affordance segmentation capabilities, enabling both world knowledge reasoning and fine-grained affordance grounding within a unified framework. To enhance action understanding capability, we leverage a latent action encoder to extract dynamic interaction priors from HOI videos. Moreover, we introduce a spatial-aware loss function to enable VideoAfford to obtain comprehensive 3D spatial knowledge. Extensive experimental evaluations demonstrate that our model significantly outperforms well-established methods and exhibits strong open-world generalization with affordance reasoning abilities. All datasets and code will be publicly released to advance research in this area.

Tags

3D affordance, Multimodal Learning, Video Understanding, Robotics

arXiv Category

cs.CV