AI Agents relevance: 8/10

Learn for Variation: Variationally Guided AAV Trajectory Learning in Differentiable Environments

Xiucheng Wang, Zhenye Chen, Nan Cheng
arXiv: 2603.18853v1 Published: 2026-03-19 Updated: 2026-03-19

AI Summary

Proposes the L4V framework, which leverages a differentiable environment and gradient propagation to address the credit-assignment and training-instability problems in AAV trajectory planning.

Key Contributions

  • Proposes the L4V framework, which adopts gradient-guided trajectory learning
  • Uses a differentiable computational graph and backpropagation to compute exact gradients
  • Validates the effectiveness of L4V in practical application scenarios

Methodology

Constructs a differentiable computational graph of AAV kinematics, channel gains, and per-user data-collection progress; obtains exact gradients via backpropagation through time; and uses them to train a deterministic neural-network policy.
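The core mechanism described above can be illustrated with a minimal one-dimensional sketch: unroll linear AAV kinematics into a computational graph, score each step with a distance-dependent rate, and run a reverse (adjoint) sweep to get exact per-action gradients. The kinematics, channel model, and all constants below are illustrative assumptions for this sketch, not the paper's actual models.

```python
import math

def rollout(x0, actions, x_user, dt=1.0, power=10.0, eps=1e-3):
    """Unroll AAV kinematics and accumulate the cumulative rate objective J."""
    xs = [x0]
    J = 0.0
    for u in actions:
        x = xs[-1] + u * dt               # linear kinematics: x_{t+1} = x_t + u_t * dt
        d2 = (x - x_user) ** 2 + eps      # squared distance to the ground user
        J += math.log(1.0 + power / d2)   # free-space-like, distance-dependent rate
        xs.append(x)
    return J, xs

def adjoint_grad(x0, actions, x_user, dt=1.0, power=10.0, eps=1e-3):
    """Exact dJ/du_t for every action via a discrete adjoint (BPTT) sweep."""
    _, xs = rollout(x0, actions, x_user, dt, power, eps)
    grads = [0.0] * len(actions)
    lam = 0.0  # adjoint state: accumulated sensitivity dJ/dx_t, swept backwards
    for t in range(len(actions), 0, -1):
        x = xs[t]
        d2 = (x - x_user) ** 2 + eps
        g = power / d2
        # dr/dx at step t, chain rule through log(1 + P/d^2)
        lam += (1.0 / (1.0 + g)) * (-2.0 * power * (x - x_user) / d2 ** 2)
        # dx_t/du_{t-1} = dt, and the dynamics Jacobian dx_{t+1}/dx_t = 1
        grads[t - 1] = lam * dt
    return grads
```

A gradient-informed trainer would feed these dense per-action sensitivities into a policy update (with the smoothness regularization and gradient clipping the paper mentions), instead of relying on a single sparse scalar reward.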

Original Abstract

Autonomous aerial vehicles (AAVs) empower sixth-generation (6G) Internet-of-Things (IoT) networks through mobility-driven data collection. However, conventional reward-driven reinforcement learning for AAV trajectory planning suffers from severe credit assignment issues and training instability, because sparse scalar rewards fail to capture the long-term and nonlinear effects of sequential movements. To address these challenges, this paper proposes Learn for Variation (L4V), a gradient-informed trajectory learning framework that replaces high-variance scalar reward signals with dense and analytically grounded policy gradients. Particularly, the coupled evolution of AAV kinematics, distance-dependent channel gains, and per-user data-collection progress is first unrolled into an end-to-end differentiable computational graph. Backpropagation through time then serves as a discrete adjoint solver, which propagates exact sensitivities from the cumulative mission objective to every control action and policy parameter. These structured gradients are used to train a deterministic neural policy with temporal smoothness regularization and gradient clipping. Extensive simulations demonstrate that L4V consistently outperforms representative baselines, including a genetic algorithm, DQN, A2C, and DDPG, in mission completion time, average transmission rate, and training cost.

Tags

AAV trajectory planning, reinforcement learning, differentiable programming, policy gradient

arXiv Categories

eess.SY, cs.LG