GEM: Guided Expectation-Maximization for Behavior-Normalized Candidate Action Selection in Offline RL
AI Summary
GEM uses a guided EM algorithm and behavior-normalized support to improve action-selection quality in offline RL.
Key Contributions
- Proposes the GEM framework for multimodal action selection in offline RL
- Trains a GMM actor via advantage-weighted EM updates
- Introduces a behavior-normalized-support method for reranking candidate actions
Methodology
A GMM actor is trained with a guided, critic-weighted EM algorithm, and candidate actions are selected at inference time using behavior-normalized support.
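A minimal sketch of what one critic-guided, advantage-weighted EM-style update for a GMM actor could look like. This is an illustrative reconstruction from the abstract's description, not the paper's actual implementation; the function name, the exponential advantage weighting with temperature `temp`, and the diagonal regularizer are all assumptions.

```python
import numpy as np

def advantage_weighted_em_step(actions, advantages, means, covs, weights, temp=1.0):
    """One advantage-weighted EM-style update for a GMM actor (sketch).

    actions:    (N, d) dataset actions
    advantages: (N,)   critic advantages A(s, a)
    means, covs, weights: current GMM parameters, shapes (K, d), (K, d, d), (K,)
    """
    N, d = actions.shape
    K = means.shape[0]

    # E-step: component responsibilities under the current GMM.
    resp = np.zeros((N, K))
    for k in range(K):
        diff = actions - means[k]
        inv = np.linalg.inv(covs[k])
        logdet = np.linalg.slogdet(covs[k])[1]
        log_pdf = -0.5 * (np.einsum('nd,dc,nc->n', diff, inv, diff)
                          + logdet + d * np.log(2 * np.pi))
        resp[:, k] = weights[k] * np.exp(log_pdf)
    resp /= resp.sum(axis=1, keepdims=True)

    # Advantage weighting (assumed exponential form, max-stabilized):
    # shifts probability mass toward high-value actions while the
    # per-component responsibilities keep distinct modes separate.
    w = np.exp((advantages - advantages.max()) / temp)
    wr = resp * w[:, None]

    # M-step: weighted parameter updates.
    Nk = wr.sum(axis=0)
    new_weights = Nk / Nk.sum()
    new_means = (wr.T @ actions) / Nk[:, None]
    new_covs = np.empty_like(covs)
    for k in range(K):
        diff = actions - new_means[k]
        new_covs[k] = (wr[:, k, None, None]
                       * (diff[:, :, None] * diff[:, None, :])).sum(0) / Nk[k]
        new_covs[k] += 1e-6 * np.eye(d)  # numerical regularizer (assumption)
    return new_means, new_covs, new_weights
```

With all advantages equal, the weighting is uniform and this reduces to a standard EM step, which is one way to see why distinct components are preserved rather than averaged into a single blurred mode.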
Original Abstract
Offline reinforcement learning (RL) can fit strong value functions from fixed datasets, yet reliable deployment still hinges on the action selection interface used to query them. When the dataset induces a branched or multimodal action landscape, unimodal policy extraction can blur competing hypotheses and yield "in-between" actions that are weakly supported by data, making decisions brittle even with a strong critic. We introduce GEM (Guided Expectation-Maximization), an analytical framework that makes action selection both multimodal and explicitly controllable. GEM trains a Gaussian Mixture Model (GMM) actor via critic-guided, advantage-weighted EM-style updates that preserve distinct components while shifting probability mass toward high-value regions, and learns a tractable GMM behavior model to quantify support. During inference, GEM performs candidate-based selection: it generates a parallel candidate set and reranks actions using a conservative ensemble lower-confidence bound together with behavior-normalized support, where the behavior log-likelihood is standardized within each state's candidate set to yield stable, comparable control across states and candidate budgets. Empirically, GEM is competitive across D4RL benchmarks, and offers a simple inference-time budget knob (candidate count) that trades compute for decision quality without retraining.
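The inference-time reranking described above can be sketched as follows. This is an illustrative reading of the abstract, assuming a Q-ensemble lower-confidence bound of the usual mean-minus-std form and a linear combination with the standardized behavior log-likelihood; the function name and the weights `beta` and `lam` are hypothetical.

```python
import numpy as np

def rerank_candidates(q_ensemble, behavior_logp, beta=1.0, lam=1.0):
    """Pick one action from a state's candidate set (sketch).

    q_ensemble:    (E, C) Q-values from E critics for C candidate actions
    behavior_logp: (C,)   behavior-model log-likelihoods of the candidates
    beta, lam:     pessimism and support weights (assumed hyperparameters)
    """
    # Conservative ensemble lower-confidence bound over the critics.
    lcb = q_ensemble.mean(axis=0) - beta * q_ensemble.std(axis=0)

    # Behavior-normalized support: standardize the log-likelihood within
    # this state's candidate set, so the support term has a comparable
    # scale across states and candidate budgets.
    z = (behavior_logp - behavior_logp.mean()) / (behavior_logp.std() + 1e-8)

    score = lcb + lam * z
    return int(np.argmax(score)), score
```

Because the score depends only on the candidate set, growing the candidate count is a pure inference-time knob: more candidates cost more critic and behavior-model evaluations but require no retraining.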