Large Language Model Guided Incentive Aware Reward Design for Cooperative Multi-Agent Reinforcement Learning
AI Summary
Proposes an automated reward-function design framework based on large language models that improves the performance of cooperative multi-agent reinforcement learning.
Key Contributions
- Proposed an LLM-based automated reward design framework
- Validated the framework's effectiveness in the Overcooked-AI environment
- Analyzed the impact of synthesized reward components on agent behavior
Methodology
An LLM generates executable reward programs, which are then selected and refined through environment feedback under a fixed computational-budget constraint, as sketched below.
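Since the summary is terse, a minimal sketch may help fix ideas. The Python function below illustrates one plausible shape for the search loop described above; the `propose`, `is_valid`, and `train_and_eval` callables are hypothetical stand-ins for the LLM call, the validity-envelope check, and budget-limited policy training, and none of them come from the paper's code.

```python
from typing import Callable, Tuple

def search_reward_program(
    propose: Callable[[str, str], str],           # LLM: (env_spec, feedback) -> reward code
    is_valid: Callable[[str], bool],              # validity-envelope check on candidate code
    train_and_eval: Callable[[str, int], float],  # trains from scratch, returns sparse task return
    env_spec: str,
    n_generations: int = 5,
    n_candidates: int = 8,
    budget_steps: int = 1_000_000,
) -> Tuple[str, float]:
    """Hypothetical LLM-guided reward-program search: generate candidates,
    filter by validity, train each under a fixed budget, and keep the one
    with the best sparse task return."""
    best_code, best_return = "", float("-inf")
    feedback = ""  # textual summary carried to the next generation
    for _ in range(n_generations):
        for _ in range(n_candidates):
            code = propose(env_spec, feedback)
            if not is_valid(code):  # reject candidates outside the validity envelope
                continue
            ret = train_and_eval(code, budget_steps)  # selection uses the sparse return only
            if ret > best_return:
                best_code, best_return = code, ret
        feedback = f"best sparse return so far: {best_return:.2f}"
    return best_code, best_return
```

A design point worth noting: the shaping signal influences training but never the selection criterion, which keeps the search grounded in the sparse task objective.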
Original Abstract
Designing effective auxiliary rewards for cooperative multi-agent systems remains a precarious task; misaligned incentives risk inducing suboptimal coordination, especially where sparse task feedback fails to provide sufficient grounding. This study introduces an automated reward design framework that leverages large language models to synthesize executable reward programs from environment instrumentation. The procedure constrains candidate programs within a formal validity envelope and evaluates their efficacy by training policies from scratch under a fixed computational budget; selection depends exclusively on the sparse task return. The framework is evaluated across four distinct Overcooked-AI layouts characterized by varied corridor congestion, handoff dependencies, and structural asymmetries. Iterative search generations consistently yield superior task returns and delivery counts, with the most pronounced gains occurring in environments dominated by interaction bottlenecks. Diagnostic analysis of the synthesized shaping components indicates increased interdependence in action selection and improved signal alignment in coordination-intensive tasks. These results demonstrate that the search for objective-grounded reward programs can mitigate the burden of manual engineering while producing shaping signals compatible with cooperative learning under finite budgets.
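To make the split between shaping and selection concrete, here is a minimal Gymnasium-style wrapper sketch. It is an illustrative assumption, not code from the paper (Overcooked-AI has its own multi-agent interface); `shaping_fn` stands in for an LLM-synthesized reward program.

```python
import gymnasium as gym

class ShapedRewardWrapper(gym.Wrapper):
    """Illustrative wrapper: adds a synthesized shaping term to the sparse
    task reward during training, while accumulating the sparse return
    separately so that candidate selection can depend on it exclusively."""

    def __init__(self, env, shaping_fn, coef=1.0):
        super().__init__(env)
        self.shaping_fn = shaping_fn  # assumed LLM-synthesized program
        self.coef = coef
        self.sparse_return = 0.0

    def reset(self, **kwargs):
        self.sparse_return = 0.0
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, sparse_r, terminated, truncated, info = self.env.step(action)
        self.sparse_return += sparse_r  # selection metric: sparse return only
        shaped_r = sparse_r + self.coef * self.shaping_fn(obs, action, info)
        info["sparse_return"] = self.sparse_return
        return obs, shaped_r, terminated, truncated, info
```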