Multimodal Learning relevance: 9/10

Hoi3DGen: Generating High-Quality Human-Object-Interactions in 3D

Agniv Sharma, Xianghui Xie, Tom Fischer, Eddy Ilg, Gerard Pons-Moll
arXiv: 2603.12126v1 Published: 2026-03-12 Updated: 2026-03-12

AI Summary

Hoi3DGen significantly improves the quality and fidelity of 3D human-object interaction generation through high-quality interaction data and a text-to-3D pipeline.

Key Contributions

  • Curated a high-quality 3D human-object interaction dataset
  • Proposed a complete text-to-3D generation framework
  • Substantially improved interaction fidelity and 3D model quality

Methodology

Leverages multimodal large language models to curate high-quality interaction data, then builds a full text-to-3D generation pipeline on top of it.

Original Abstract

Modeling and generating 3D human-object interactions from text is crucial for applications in AR, XR, and gaming. Existing approaches often rely on score distillation from text-to-image models, but their results suffer from the Janus problem and do not follow text prompts faithfully due to the scarcity of high-quality interaction data. We introduce Hoi3DGen, a framework that generates high-quality textured meshes of human-object interaction that follow the input interaction descriptions precisely. We first curate realistic and high-quality interaction data leveraging multimodal large language models, and then create a full text-to-3D pipeline, which achieves orders-of-magnitude improvements in interaction fidelity. Our method surpasses baselines by 4-15x in text consistency and 3-7x in 3D model quality, exhibiting strong generalization to diverse categories and interaction types, while maintaining high-quality 3D generation.

Tags

3D generation Human-Object Interaction Multimodal learning

arXiv Categories

cs.CV cs.LG