Multimodal Learning (relevance: 9/10)

MoRL: Reinforced Reasoning for Unified Motion Understanding and Generation

Hongpeng Wang, Zeyu Zhang, Wenhao Li, Hao Tang
arXiv: 2602.14534v1 Published: 2026-02-16 Updated: 2026-02-16

AI Summary

MoRL unifies motion understanding and generation through reinforcement learning and Chain-of-Motion reasoning, significantly improving both logical reasoning and perceptual realism.

Key Contributions

  • Proposes MoRL, a unified multimodal motion model trained with reinforcement learning from verifiable rewards
  • Introduces Chain-of-Motion (CoM), a test-time reasoning method that enhances reasoning capability
  • Constructs two large-scale chain-of-thought datasets, MoUnd-CoT-140K and MoGen-CoT-140K

Methodology

Trains a multimodal motion model with supervised fine-tuning followed by reinforcement learning, using task-specific reward designs, and applies Chain-of-Motion reasoning for step-by-step test-time planning.
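The abstract states that the reward design combines semantic alignment and reasoning coherence for understanding, and physical plausibility and text-motion consistency for generation. A minimal sketch of such a composition, assuming a simple weighted sum (the function names, weights, and score ranges are illustrative assumptions, not the paper's actual implementation):

```python
# Hypothetical sketch of MoRL-style task-specific reward composition.
# Sub-scores are assumed to lie in [0, 1]; the weighted-sum form and the
# 0.5 default weights are illustrative assumptions.

def understanding_reward(semantic_alignment: float,
                         reasoning_coherence: float,
                         w_sem: float = 0.5) -> float:
    """Combined reward for the motion-understanding task."""
    return w_sem * semantic_alignment + (1.0 - w_sem) * reasoning_coherence


def generation_reward(physical_plausibility: float,
                      text_motion_consistency: float,
                      w_phys: float = 0.5) -> float:
    """Combined reward for the motion-generation task."""
    return w_phys * physical_plausibility + (1.0 - w_phys) * text_motion_consistency


# Example: equal weighting of two sub-scores.
r = understanding_reward(0.8, 0.6)  # 0.5 * 0.8 + 0.5 * 0.6 = 0.7
```

In RL with verifiable rewards, each sub-score would come from an automatic checker rather than a learned reward model; the weighting here simply trades off the two criteria per task.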

Original Abstract

Human motion understanding and generation are crucial for vision and robotics but remain limited in reasoning capability and test-time planning. We propose MoRL, a unified multimodal motion model trained with supervised fine-tuning and reinforcement learning with verifiable rewards. Our task-specific reward design combines semantic alignment and reasoning coherence for understanding with physical plausibility and text-motion consistency for generation, improving both logical reasoning and perceptual realism. To further enhance inference, we introduce Chain-of-Motion (CoM), a test-time reasoning method that enables step-by-step planning and reflection. We also construct two large-scale CoT datasets, MoUnd-CoT-140K and MoGen-CoT-140K, to align motion sequences with reasoning traces and action descriptions. Experiments on HumanML3D and KIT-ML show that MoRL achieves significant gains over state-of-the-art baselines. Code: https://github.com/AIGeeksGroup/MoRL. Website: https://aigeeksgroup.github.io/MoRL.

Tags

motion understanding · motion generation · reinforcement learning · chain-of-thought · multimodal

arXiv Category

cs.CV