AI Agents relevance: 6/10

Recover to Predict: Progressive Retrospective Learning for Variable-Length Trajectory Prediction

arXiv: 2603.10597v1 发布: 2026-03-11 更新: 2026-03-11

AI Summary

For the variable-length trajectory prediction problem, the paper proposes the Progressive Retrospective Framework (PRF), which improves prediction accuracy on short observed trajectories.

Key Contributions

  • Proposes the Progressive Retrospective Framework (PRF), which progressively aligns features from incomplete observations
  • Designs the Retrospective Distillation Module (RDM) and the Retrospective Prediction Module (RPM)
  • Proposes the Rolling-Start Training Strategy (RSTS) to improve data efficiency

Methodology

Through a cascade of retrospective units, features extracted from incomplete observations are progressively aligned with those from complete observations, enabling retrospective learning.
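To make the cascade concrete, here is a minimal NumPy sketch, assuming each retrospective unit contains a distillation step (RDM) that summarizes the current feature sequence and a prediction step (RPM) that synthesizes one earlier timestep from that summary. The linear weights, the prepend-one-step recovery scheme, and all function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def retrospective_unit(feats, W_distill, W_recover):
    """One hypothetical retrospective unit.
    RDM step: distill the feature sequence into a summary vector.
    RPM step: use the summary to synthesize one earlier timestep,
    which is prepended so the sequence grows toward full length."""
    summary = np.tanh(feats.mean(axis=0) @ W_distill)  # RDM: distill features
    recovered = summary @ W_recover                    # RPM: recover step t-1
    return np.vstack([recovered, feats])               # prepend recovered step

def progressive_retrospect(obs_feats, full_len, d):
    """Cascade units until the observation reaches the full length."""
    feats = obs_feats
    while feats.shape[0] < full_len:
        W_d = rng.normal(scale=0.1, size=(d, d))
        W_r = rng.normal(scale=0.1, size=(d, d))
        feats = retrospective_unit(feats, W_d, W_r)
    return feats

d = 8
short_obs = rng.normal(size=(3, d))  # incomplete observation: 3 of 10 steps
aligned = progressive_retrospect(short_obs, full_len=10, d=d)
print(aligned.shape)  # (10, 8)
```

The observed timesteps are never overwritten; each unit only prepends one recovered step, which mirrors the "gradual alignment" idea rather than a one-shot mapping from short to full features.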

Original Abstract

Trajectory prediction is critical for autonomous driving, enabling safe and efficient planning in dense, dynamic traffic. Most existing methods optimize prediction accuracy under fixed-length observations. However, real-world driving often yields variable-length, incomplete observations, posing a challenge to these methods. A common strategy is to directly map features from incomplete observations to those from complete ones. This one-shot mapping, however, struggles to learn accurate representations for short trajectories due to significant information gaps. To address this issue, we propose a Progressive Retrospective Framework (PRF), which gradually aligns features from incomplete observations with those from complete ones via a cascade of retrospective units. Each unit consists of a Retrospective Distillation Module (RDM) and a Retrospective Prediction Module (RPM), where RDM distills features and RPM recovers previous timesteps using the distilled features. Moreover, we propose a Rolling-Start Training Strategy (RSTS) that enhances data efficiency during PRF training. PRF is plug-and-play with existing methods. Extensive experiments on the Argoverse 2 and Argoverse 1 datasets demonstrate the effectiveness of PRF. Code is available at https://github.com/zhouhao94/PRF.
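The abstract's Rolling-Start Training Strategy (RSTS) is described only as a data-efficiency measure; one plausible reading is that each full trajectory yields many variable-length training observations by rolling the observation's start index. The sketch below illustrates that reading; the function name, the `min_len` parameter, and the slicing scheme are assumptions, not details from the paper:

```python
def rolling_start_samples(traj, min_len):
    """Hypothetical rolling-start sampler: carve multiple variable-length
    observations from one full trajectory by rolling the start index,
    so a single recorded trajectory supplies many training samples."""
    T = len(traj)
    # Keep every suffix that still has at least min_len observed steps.
    return [traj[s:] for s in range(0, T - min_len + 1)]

samples = rolling_start_samples(list(range(10)), min_len=4)
print(len(samples))  # 7 samples, lengths 10 down to 4
```

Under this reading, the same ground-truth future is paired with observations of every admissible length, which is what lets PRF train its cascade of retrospective units without collecting extra data.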

Tags

Trajectory Prediction · Autonomous Driving · Variable-Length Observations · Retrospective Learning

arXiv Categories

cs.RO cs.AI