AI Agents relevance: 8/10

Per-Domain Generalizing Policies: On Learning Efficient and Robust Q-Value Functions (Extended Version with Technical Appendix)

Nicola J. Müller, Moritz Oster, Isabel Valera, Jörg Hoffmann, Timo P. Gros
arXiv: 2603.17544v1 Published: 2026-03-18 Updated: 2026-03-18

AI Summary

Proposes a regularized Q-value learning method that improves the efficiency and robustness of per-domain generalizing planning policies.

Key Contributions

  • Proposes learning planning policies as Q-value functions, which require evaluating only the current state rather than every successor
  • Uses regularization terms to distinguish the actions taken by the teacher planner from those not taken
  • Shows across 10 domains that the resulting Q-value policies outperform state-value policies and are competitive with LAMA-first

Methodology

Learns Q-value functions with graph neural networks, constraining training with regularization terms to improve the generalization ability of the learned policies.
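The core idea can be sketched as a supervised loss plus a margin regularizer. The sketch below is a hypothetical illustration, not the paper's actual loss: `q_policy_loss`, the `margin`, and `reg_weight` are assumed names/parameters. It regresses the teacher-taken action's Q-value to its target while a hinge term pushes every non-taken action's Q-value at least `margin` below it, which is the kind of taken-vs-not-taken distinction the summary describes.

```python
import numpy as np

def q_policy_loss(q_values, taken, target, margin=1.0, reg_weight=0.1):
    """Hypothetical sketch of the regularized Q-value objective.

    q_values   -- array of predicted Q-values, one per applicable action
    taken      -- index of the action chosen by the teacher planner
    target     -- teacher-provided Q-value target for that action
    margin     -- assumed separation enforced against non-taken actions
    reg_weight -- assumed weight of the regularization term
    """
    # Supervised term: fit the taken action's Q-value to the teacher target.
    sup = (q_values[taken] - target) ** 2
    # Regularizer: hinge penalty whenever a non-taken action's Q-value
    # comes within `margin` of the taken action's Q-value.
    others = np.delete(q_values, taken)
    reg = np.maximum(0.0, others - q_values[taken] + margin).sum()
    return sup + reg_weight * reg
```

At execution time such a policy would act greedily, i.e. pick `argmax` over the Q-values of the applicable actions, so only the current state's graph needs a forward pass.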

Original Abstract

Learning per-domain generalizing policies is a key challenge in learning for planning. Standard approaches learn state-value functions represented as graph neural networks using supervised learning on optimal plans generated by a teacher planner. In this work, we advocate for learning Q-value functions instead. Such policies are drastically cheaper to evaluate for a given state, as they need to process only the current state rather than every successor. Surprisingly, vanilla supervised learning of Q-values performs poorly as it does not learn to distinguish between the actions taken and those not taken by the teacher. We address this by using regularization terms that enforce this distinction, resulting in Q-value policies that consistently outperform state-value policies across a range of 10 domains and are competitive with the planner LAMA-first.

Tags

Planning, Q-value functions, Generalization, Graph neural networks, Regularization

arXiv Categories

cs.AI cs.LG