AI Agents relevance: 8/10

Efficient Morphology-Control Co-Design via Stackelberg Proximal Policy Optimization

Yanning Dai, Yuhui Wang, Dylan R. Ashley, Jürgen Schmidhuber
arXiv: 2603.15388v1 Published: 2026-03-16 Updated: 2026-03-16

AI Summary

Proposes the Stackelberg PPO algorithm, which addresses the control's adaptation dynamics in morphology-control co-design and improves learning efficiency.

Key Contributions

  • Proposes the Stackelberg PPO algorithm, modeling the intrinsic coupling between morphology and control as a Stackelberg game.
  • Explicitly incorporates the control's adaptation dynamics into morphology optimization, stabilizing training and improving learning efficiency.
  • Validates across multiple co-design tasks that Stackelberg PPO outperforms standard PPO.

Methodology

Models the morphology-control co-design problem as a Stackelberg game and solves it with Proximal Policy Optimization (PPO); a sketch of the resulting leader-follower gradient structure follows below.
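To make the leader-follower coupling concrete, here is a minimal sketch in plain NumPy, assuming a made-up smooth joint reward J(m, c) in place of the paper's RL objective: the morphology m (leader) update differentiates through one adaptation step of the control c (follower), the kind of lookahead a single-level update would miss. The reward, step sizes, and initialization are all hypothetical.

```python
# Toy illustration of the leader-follower (Stackelberg) gradient structure.
# `m` stands in for the morphology (leader), `c` for the control (follower);
# J is a made-up smooth reward, not the paper's RL objective.
import numpy as np

def J(m, c):
    """Joint reward: the follower wants c close to m; the leader pays a cost on m."""
    return -(c - m) ** 2 - 0.1 * m ** 2

def dJ_dm(m, c):
    return 2.0 * (c - m) - 0.2 * m

def dJ_dc(m, c):
    return -2.0 * (c - m)

eta = 0.2          # follower (control) step size
beta = 0.1         # leader (morphology) step size
m, c = 2.0, -1.0   # arbitrary initialization

for step in range(200):
    # Follower adapts first: one gradient-ascent step on J w.r.t. c.
    c_adapted = c + eta * dJ_dc(m, c)

    # Naive single-level gradient: treats the adapted control as fixed.
    g_naive = dJ_dm(m, c_adapted)

    # Stackelberg lookahead gradient: additionally differentiates through the
    # follower's adaptation step c_adapted(m) = c + eta * dJ_dc(m, c),
    # whose sensitivity to m is d(c_adapted)/dm = 2 * eta for this J.
    g_stackelberg = g_naive + dJ_dc(m, c_adapted) * (2.0 * eta)

    m = m + beta * g_stackelberg   # leader anticipates the follower's response
    c = c_adapted

print(f"m = {m:.4f}, c = {c:.4f}, J = {J(m, c):.4f}")  # converges near m = c = 0
```

The extra lookahead term is what lets the leader's updates account for how the follower will respond, which is the alignment between morphology updates and control adaptation described in the contributions above.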

Original Abstract

Morphology-control co-design concerns the coupled optimization of an agent's body structure and control policy. This problem exhibits a bi-level structure, where the control dynamically adapts to the morphology to maximize performance. Existing methods typically neglect the control's adaptation dynamics by adopting a single-level formulation that treats the control policy as fixed when optimizing morphology. This can lead to inefficient optimization, as morphology updates may be misaligned with control adaptation. In this paper, we revisit the co-design problem from a game-theoretic perspective, modeling the intrinsic coupling between morphology and control as a novel variant of a Stackelberg game. We propose Stackelberg Proximal Policy Optimization (Stackelberg PPO), which explicitly incorporates the control's adaptation dynamics into morphology optimization. By modeling this intrinsic coupling, our method aligns morphology updates with control adaptation, thereby stabilizing training and improving learning efficiency. Experiments across diverse co-design tasks demonstrate that Stackelberg PPO outperforms standard PPO in both stability and final performance, opening the way for dramatically more efficient robotics designs.
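For reference, the follower-side building block named in the methodology is PPO; its standard clipped surrogate loss is sketched below in PyTorch. This is the vanilla PPO objective from the original PPO paper, not the paper's Stackelberg variant; the function name and toy tensors are illustrative.

```python
# Minimal sketch of PPO's clipped surrogate loss,
# L = -E[min(r_t * A_t, clip(r_t, 1-eps, 1+eps) * A_t)].
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, epsilon=0.2):
    ratio = torch.exp(log_probs_new - log_probs_old)   # importance ratio r_t
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
    return -torch.min(unclipped, clipped).mean()

# Toy usage with random stand-in data.
torch.manual_seed(0)
lp_old = torch.randn(64)
lp_new = lp_old + 0.1 * torch.randn(64)
adv = torch.randn(64)
print(ppo_clip_loss(lp_new, lp_old, adv))
```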

Tags

Morphology-control co-design · Reinforcement learning · Stackelberg game · PPO

arXiv Categories

cs.LG cs.AI cs.RO stat.ML