AI Agents relevance: 7/10

InterPrior: Scaling Generative Control for Physics-Based Human-Object Interactions

Sirui Xu, Samuel Schulter, Morteza Ziyadi, Xialin He, Xiaohan Fei, Yu-Xiong Wang, Liangyan Gui
arXiv: 2602.06035v1 Published: 2026-02-05 Updated: 2026-02-05

AI Summary

InterPrior proposes a scalable generative controller for learning physics-based human-object interactions by combining imitation learning and reinforcement learning.

Key Contributions

  • Proposes the InterPrior framework for learning a generative controller for human-object interaction
  • Combines large-scale imitation pretraining with reinforcement learning finetuning to improve the controller's generalization
  • Demonstrates the framework's effectiveness for user-interactive control and its potential for real-robot deployment

Methodology

The controller is first pretrained by imitation learning, distilling a full-reference expert into a general, goal-conditioned variational policy; it is then finetuned with data augmentation (physical perturbations) and reinforcement learning to generalize to unseen goals and initializations.
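The pretraining stage above can be sketched in miniature. Everything below is hypothetical, not from the paper: the tiny linear networks, dimensions, `beta` weight, and helper names only illustrate the shape of a variational distillation objective, where the policy reconstructs the expert's action from observation and goal through a latent skill code, regularized by a KL term toward a standard-normal prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the real controller is far larger).
OBS, GOAL, LATENT, ACT = 8, 4, 3, 6

# Randomly initialized linear encoder/decoder weights stand in for networks.
W_mu = rng.normal(scale=0.1, size=(OBS + GOAL, LATENT))
W_logsd = rng.normal(scale=0.1, size=(OBS + GOAL, LATENT))
W_dec = rng.normal(scale=0.1, size=(LATENT + OBS, ACT))

def distill_loss(obs, goal, expert_action, beta=1e-3):
    """One evaluation of a variational distillation objective:
    reconstruct the expert's action through a latent skill z drawn
    from q(z | obs, goal), plus a KL(q || N(0, I)) regularizer."""
    x = np.concatenate([obs, goal])
    mu, log_sd = x @ W_mu, x @ W_logsd
    z = mu + np.exp(log_sd) * rng.normal(size=LATENT)  # reparameterization
    action = np.concatenate([z, obs]) @ W_dec          # latent + obs -> action
    recon = np.mean((action - expert_action) ** 2)     # imitation term
    kl = 0.5 * np.sum(np.exp(2 * log_sd) + mu**2 - 1 - 2 * log_sd)
    return recon + beta * kl

obs, goal = rng.normal(size=OBS), rng.normal(size=GOAL)
expert_action = rng.normal(size=ACT)
loss = distill_loss(obs, goal, expert_action)
print(float(loss))
```

The RL finetuning stage would then optimize a reward over the same latent space rather than this reconstruction loss, which is what consolidates the latent skills into a usable manifold.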

Original Abstract

Humans rarely plan whole-body interactions with objects at the level of explicit whole-body movements. High-level intentions, such as affordance, define the goal, while coordinated balance, contact, and manipulation can emerge naturally from underlying physical and motor priors. Scaling such priors is key to enabling humanoids to compose and generalize loco-manipulation skills across diverse contexts while maintaining physically coherent whole-body coordination. To this end, we introduce InterPrior, a scalable framework that learns a unified generative controller through large-scale imitation pretraining and post-training by reinforcement learning. InterPrior first distills a full-reference imitation expert into a versatile, goal-conditioned variational policy that reconstructs motion from multimodal observations and high-level intent. While the distilled policy reconstructs training behaviors, it does not generalize reliably due to the vast configuration space of large-scale human-object interactions. To address this, we apply data augmentation with physical perturbations, and then perform reinforcement learning finetuning to improve competence on unseen goals and initializations. Together, these steps consolidate the reconstructed latent skills into a valid manifold, yielding a motion prior that generalizes beyond the training data, e.g., it can incorporate new behaviors such as interactions with unseen objects. We further demonstrate its effectiveness for user-interactive control and its potential for real robot deployment.

Tags

Robotics Human-Object Interaction Reinforcement Learning Imitation Learning

arXiv Categories

cs.CV cs.GR cs.RO