Hierarchical Lead Critic based Multi-Agent Reinforcement Learning
AI Summary
Proposes a multi-agent reinforcement learning method based on a hierarchical lead critic, improving performance and robustness on cooperative tasks.
Key Contributions
- Proposes the Hierarchical Lead Critic (HLC) architecture
- Introduces a multi-level learning mechanism combining local and global perspectives
- Validates the effectiveness and scalability of HLC on cooperative MARL tasks
Methodology
Through sequential training and a hierarchical architecture, different perspectives are learned at different hierarchy levels, combining high-level objectives with low-level execution.
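The sequential, multi-level scheme can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class names (`LocalCritic`, `LeadCritic`), the scalar TD-style updates, and the exact training order are assumptions for exposition only.

```python
class LocalCritic:
    """Per-agent critic: sees only its own agent's return (local perspective).
    Hypothetical scalar stand-in for a learned value function."""
    def __init__(self, lr=0.1):
        self.value = 0.0
        self.lr = lr

    def update(self, target):
        # Simple TD-style move toward the observed return.
        self.value += self.lr * (target - self.value)


class LeadCritic:
    """High-level lead critic: fits the team-wide return (global perspective)."""
    def __init__(self, lr=0.1):
        self.value = 0.0
        self.lr = lr

    def update(self, team_return):
        self.value += self.lr * (team_return - self.value)


def sequential_training(episode_returns, n_agents=3):
    """Assumed sequential scheme: low-level (local) critics are updated
    first each episode, then the lead critic is updated on the team
    return aggregated from the local views."""
    local_critics = [LocalCritic() for _ in range(n_agents)]
    lead = LeadCritic()
    for per_agent_returns in episode_returns:
        # Low level: each agent fits its own return.
        for critic, r in zip(local_critics, per_agent_returns):
            critic.update(r)
        # High level: lead critic fits the aggregated team return.
        lead.update(sum(per_agent_returns))
    return local_critics, lead
```

With stationary returns, each local critic converges to its agent's return and the lead critic to the team return, mirroring the idea of combining local execution signals under a high-level objective.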
Original Abstract
Cooperative Multi-Agent Reinforcement Learning (MARL) solves complex tasks that require coordination from multiple agents, but is often limited to either local (independent learning) or global (centralized learning) perspectives. In this paper, we introduce a novel sequential training scheme and MARL architecture, which learns from multiple perspectives on different hierarchy levels. We propose the Hierarchical Lead Critic (HLC) - inspired by naturally emerging distributions in team structures, where following high-level objectives combines with low-level execution. HLC demonstrates that introducing multiple hierarchies, leveraging local and global perspectives, can lead to improved performance with high sample efficiency and robust policies. Experimental results conducted on cooperative, non-communicative, and partially observable MARL benchmarks demonstrate that HLC outperforms single-hierarchy baselines and scales robustly with increasing numbers of agents and task difficulty.