Multimodal Learning Relevance: 9/10

TeHOR: Text-Guided 3D Human and Object Reconstruction with Textures

Hyeongjin Nam, Daniel Sungho Jung, Kyoung Mu Lee
arXiv: 2602.19679v1 Published: 2026-02-23 Updated: 2026-02-23

AI Summary

Proposes TeHOR, a framework that uses text and appearance cues to guide joint 3D human and object reconstruction, improving semantic consistency and visual realism.

Main Contributions

  • Introduces text descriptions of human-object interactions, enabling reconstruction of non-contact interactions
  • Incorporates appearance cues to capture global context, improving reconstruction quality
  • Proposes the TeHOR framework, achieving state-of-the-art performance

Methodology

Leverages text descriptions together with appearance cues of the 3D human and object to enforce semantic alignment, yielding more accurate and semantically coherent reconstructions.
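The paper does not include code here; a minimal sketch of the general idea — scoring a candidate reconstruction against its interaction text via embedding similarity, as is common in CLIP-style semantic alignment — might look like the following. All names and the toy embeddings are hypothetical, not TeHOR's actual implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_alignment_loss(render_emb: np.ndarray, text_emb: np.ndarray) -> float:
    """Hypothetical alignment loss: low when the rendered reconstruction's
    embedding agrees with the embedding of the interaction text
    (e.g. "a person pointing at a chair")."""
    return 1.0 - cosine_similarity(render_emb, text_emb)

# Toy stand-ins for embeddings of the text and of two rendered reconstructions.
text_emb = np.array([1.0, 0.0, 0.0])
good_render = np.array([0.9, 0.1, 0.0])   # matches the described interaction
bad_render = np.array([0.0, 1.0, 0.0])    # does not match

# A matching reconstruction incurs a lower loss than a mismatched one.
assert semantic_alignment_loss(good_render, text_emb) < semantic_alignment_loss(bad_render, text_emb)
```

In practice such a loss would be computed with a pretrained vision-language encoder and combined with geometric and contact terms; this sketch only illustrates the alignment objective's shape.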

Original Abstract

Joint reconstruction of 3D human and object from a single image is an active research area, with pivotal applications in robotics and digital content creation. Despite recent advances, existing approaches suffer from two fundamental limitations. First, their reconstructions rely heavily on physical contact information, which inherently cannot capture non-contact human-object interactions, such as gazing at or pointing toward an object. Second, the reconstruction process is primarily driven by local geometric proximity, neglecting the human and object appearances that provide global context crucial for understanding holistic interactions. To address these issues, we introduce TeHOR, a framework built upon two core designs. First, beyond contact information, our framework leverages text descriptions of human-object interactions to enforce semantic alignment between the 3D reconstruction and its textual cues, enabling reasoning over a wider spectrum of interactions, including non-contact cases. Second, we incorporate appearance cues of the 3D human and object into the alignment process to capture holistic contextual information, thereby ensuring visually plausible reconstructions. As a result, our framework produces accurate and semantically coherent reconstructions, achieving state-of-the-art performance.

Tags

3D Reconstruction Human-Object Interaction Text-Guided Reconstruction Multimodal Learning

arXiv Categories

cs.CV cs.AI