AI Agents relevance: 7/10

IPD: Boosting Sequential Policy with Imaginary Planning Distillation in Offline Reinforcement Learning

Yihao Qin, Yuanfei Wang, Hang Zhou, Peiran Liu, Hao Dong, Yiding Ji
arXiv: 2603.04289v1 Published: 2026-03-04 Updated: 2026-03-04

AI Summary

IPD improves sequential policies through offline planning distillation, addressing two problems in offline RL: limited dataset quality and the lack of explicit planning.

Main Contributions

  • Proposes the Imaginary Planning Distillation (IPD) framework
  • Uses a learned world model and MPC to generate imagined, optimized trajectories
  • Distills the optimal policy with a value-guided training objective

Methodology

IPD learns a world model and a value function from offline data, generates optimized trajectories via MPC, and then trains a Transformer-based sequential policy on the augmented dataset with a distillation objective.
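The data-augmentation step described above can be sketched as random-shooting MPC inside a learned world model, where candidate action sequences are scored by imagined rewards plus a terminal value bootstrap. This is a minimal illustration under assumed stand-in models (the dynamics, reward, and value functions below are placeholders, not the paper's learned networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def world_model(state, action):
    # Stand-in linear dynamics, playing the role of the learned world model.
    return 0.9 * state + 0.1 * action

def reward_fn(state, action):
    # Stand-in reward: drive the state to the origin with a small action cost.
    return -np.sum(state**2) - 0.01 * np.sum(action**2)

def value_fn(state):
    # Stand-in for the learned quasi-optimal value function.
    return -np.sum(state**2)

def mpc_imagined_rollout(state, horizon=5, n_candidates=64, gamma=0.99):
    """Random-shooting MPC: sample candidate action sequences, roll each out
    in the world model, score by discounted imagined reward plus a terminal
    value bootstrap, and keep the best imagined trajectory."""
    best_return, best_traj = -np.inf, None
    for _ in range(n_candidates):
        s, ret, traj = state.copy(), 0.0, []
        actions = rng.normal(size=(horizon, state.shape[0]))
        for t, a in enumerate(actions):
            ret += (gamma**t) * reward_fn(s, a)
            s = world_model(s, a)
            traj.append((s.copy(), a.copy()))
        ret += (gamma**horizon) * value_fn(s)  # terminal bootstrap
        if ret > best_return:
            best_return, best_traj = ret, traj
    return best_return, best_traj

best_return, best_traj = mpc_imagined_rollout(np.ones(3))
print(len(best_traj), np.isfinite(best_return))
```

In the actual framework, the selected imagined rollouts replace or augment suboptimal dataset segments before the sequential policy is trained on them.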

Original Abstract

Decision-transformer-based sequential policies have emerged as a powerful paradigm in offline reinforcement learning (RL), yet their efficacy remains constrained by the quality of static datasets and inherent architectural limitations. Specifically, these models often struggle to effectively integrate suboptimal experiences and fail to explicitly plan for an optimal policy. To bridge this gap, we propose Imaginary Planning Distillation (IPD), a novel framework that seamlessly incorporates offline planning into data generation, supervised training, and online inference. Our framework first learns a world model equipped with uncertainty measures and a quasi-optimal value function from the offline data. These components are utilized to identify suboptimal trajectories and augment them with reliable, imagined optimal rollouts generated via Model Predictive Control (MPC). A Transformer-based sequential policy is then trained on this enriched dataset, complemented by a value-guided objective that promotes the distillation of the optimal policy. By replacing the conventional, manually-tuned return-to-go with the learned quasi-optimal value function, IPD improves both decision-making stability and performance during inference. Empirical evaluations on the D4RL benchmark demonstrate that IPD significantly outperforms several state-of-the-art value-based and transformer-based offline RL methods across diverse tasks.
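The abstract names a value-guided objective without giving its form. One common instantiation of such an objective, shown here purely as an assumption and not as the paper's actual loss, is advantage-weighted behavior cloning, where actions whose imagined outcome beats the current value estimate receive a larger weight:

```python
import numpy as np

def value_weighted_bc_loss(logp_actions, advantages, beta=1.0, w_max=20.0):
    """Advantage-weighted behavior cloning (illustrative, assumed form):
    exponentiate the advantage, clip the weight for stability, and take the
    weighted negative log-likelihood of the dataset actions."""
    weights = np.minimum(np.exp(beta * advantages), w_max)
    return -np.mean(weights * logp_actions)

# Toy example: log-probabilities of three dataset actions under the policy,
# and their advantages under a learned value function (made-up numbers).
logp = np.log(np.array([0.5, 0.9, 0.1]))
adv = np.array([1.0, -0.5, 0.2])
loss = value_weighted_bc_loss(logp, adv)
print(loss > 0.0)
```

The weighting biases distillation toward the imagined high-value rollouts rather than imitating the dataset uniformly, which matches the abstract's stated goal of integrating suboptimal experiences selectively.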

Tags

offline reinforcement learning, decision transformer, model predictive control

arXiv Categories

cs.LG cs.AI