Efficient Unsupervised Environment Design through Hierarchical Policy Representation Learning
AI Summary
Proposes a hierarchical MDP framework for efficient unsupervised environment design that learns representations of the student's policy, reducing the number of teacher-student interactions required.
Main Contributions
- Proposes a hierarchical MDP framework for environment design
- Uses student policy representations to guide environment generation
- Introduces a generative model to augment the teacher's training data, reducing teacher-student interactions
Methodology
Constructs a hierarchical MDP in which a teacher agent uses representations of the student's policy to generate training environments, and employs a generative model to augment the teacher's training data with synthetic samples.
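The teacher-student loop described above can be sketched in code. This is a minimal illustrative toy, not the paper's implementation: all names (`ToyStudent`, `policy_representation`, `Teacher.generate_env`, the skill-based environments) are hypothetical, and the student representation is reduced to per-environment success rates for clarity.

```python
class ToyStudent:
    """Stand-in student agent with fixed per-skill success rates (illustrative)."""
    def __init__(self, skills):
        self.skills = skills  # e.g. {"jump": 0.9, "climb": 0.2}

    def success_rate(self, env):
        return self.skills[env["skill"]]

def policy_representation(student, eval_envs):
    """Summarize the student's policy via its behavior on discovered
    evaluation environments (here: a vector of success rates)."""
    return [student.success_rate(env) for env in eval_envs]

class Teacher:
    """Teacher agent in the outer level of the hierarchical MDP:
    maps a student representation to a new training environment."""
    def __init__(self, eval_envs):
        self.eval_envs = eval_envs
        self.dataset = []  # (representation, environment) pairs for teacher training

    def generate_env(self, representation):
        # Target the student's weakest evaluated capability.
        weakest = min(range(len(representation)), key=representation.__getitem__)
        env = dict(self.eval_envs[weakest])
        env["difficulty"] = 1.0 - representation[weakest]
        self.dataset.append((representation, env))
        return env

    def augment(self, generative_model, n):
        # Synthetic samples stand in for costly real teacher-student interactions.
        self.dataset.extend(generative_model() for _ in range(n))

# One step of the loop on a toy domain with two skills.
eval_envs = [{"skill": "jump"}, {"skill": "climb"}]
student = ToyStudent({"jump": 0.9, "climb": 0.2})
teacher = Teacher(eval_envs)

rep = policy_representation(student, eval_envs)
env = teacher.generate_env(rep)  # targets "climb", the weaker skill
teacher.augment(lambda: (rep, {"skill": "jump", "difficulty": 0.5}), n=3)
```

The key design point mirrored here is that the teacher never observes the student's parameters directly, only a behavioral representation, and that the generative model lets the teacher's dataset grow faster than the count of real interactions.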
Original Abstract
Unsupervised Environment Design (UED) has emerged as a promising approach to developing general-purpose agents through automated curriculum generation. Popular UED methods focus on Open-Endedness, where teacher algorithms rely on stochastic processes for infinite generation of useful environments. This assumption becomes impractical in resource-constrained scenarios where teacher-student interaction opportunities are limited. To address this challenge, we introduce a hierarchical Markov Decision Process (MDP) framework for environment design. Our framework features a teacher agent that leverages student policy representations derived from discovered evaluation environments, enabling it to generate training environments based on the student's capabilities. To improve efficiency, we incorporate a generative model that augments the teacher's training dataset with synthetic data, reducing the need for teacher-student interactions. In experiments across several domains, we show that our method outperforms baseline approaches while requiring fewer teacher-student interactions in a single episode. The results suggest the applicability of our approach in settings where training opportunities are limited.