GIPO: Gaussian Importance Sampling Policy Optimization
AI Summary
GIPO introduces an importance-sampling-based policy optimization method that improves the sample efficiency and stability of reinforcement learning.
Main Contributions
- Proposes the GIPO algorithm, which softens importance ratios with a Gaussian weight
- Provides theoretical analysis establishing GIPO's implicit update constraint and robustness
- Demonstrates experimentally that GIPO outperforms baseline algorithms across diverse settings
Methodology
GIPO performs policy optimization through truncated importance sampling, replacing hard clipping with a log-ratio-based Gaussian weight; a minimal sketch follows below.
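To illustrate the idea, here is a minimal sketch of how a log-ratio-based Gaussian weight might replace PPO-style hard clipping. The function names, the bandwidth parameter `sigma`, the detached weight, and the surrogate-loss shape are all assumptions for illustration; the paper's exact objective may differ.

```python
import torch


def gipo_surrogate_loss(logp_new, logp_old, advantages, sigma=0.5):
    """Illustrative GIPO-style surrogate loss (a sketch, not the paper's exact form).

    Instead of hard-clipping the importance ratio r = pi_new / pi_old,
    the ratio is damped by a Gaussian weight in log-ratio space:
        w(r) = exp(-(log r)^2 / (2 * sigma^2))
    so extreme ratios are softly suppressed but keep non-zero gradients.
    """
    log_ratio = logp_new - logp_old            # log r
    ratio = torch.exp(log_ratio)               # importance ratio r
    # Gaussian trust weight: equals 1 when r == 1 and decays smoothly as
    # |log r| grows. Detached here so it acts as a per-sample trust
    # coefficient rather than adding its own gradient term (an assumed choice).
    weight = torch.exp(-log_ratio.detach() ** 2 / (2.0 * sigma ** 2))
    return -(weight * ratio * advantages).mean()


def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Standard PPO clipped surrogate, for comparison: the gradient vanishes
    entirely once the ratio leaves the [1 - eps, 1 + eps] interval."""
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    return -torch.minimum(ratio * advantages, clipped * advantages).mean()


if __name__ == "__main__":
    # Tiny smoke test with random data (illustration only).
    lp_new = torch.randn(8, requires_grad=True)
    lp_old, adv = torch.randn(8), torch.randn(8)
    loss = gipo_surrogate_loss(lp_new, lp_old, adv)
    loss.backward()  # gradients stay non-zero even for extreme ratios
    print(float(loss), lp_new.grad.abs().sum().item())
```

Compared with the hard clip, the Gaussian weight never zeroes a sample's gradient outright, which matches the paper's stated goal of softly damping extreme ratios while maintaining non-zero gradients.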
Original Abstract
Post-training with reinforcement learning (RL) has recently shown strong promise for advancing multimodal agents beyond supervised imitation. However, RL remains limited by poor data efficiency, particularly in settings where interaction data are scarce and quickly become outdated. To address this challenge, GIPO (Gaussian Importance sampling Policy Optimization) is proposed as a policy optimization objective based on truncated importance sampling, replacing hard clipping with a log-ratio-based Gaussian trust weight to softly damp extreme importance ratios while maintaining non-zero gradients. Theoretical analysis shows that GIPO introduces an implicit, tunable constraint on the update magnitude, while concentration bounds guarantee robustness and stability under finite-sample estimation. Experimental results show that GIPO achieves state-of-the-art performance among clipping-based baselines across a wide range of replay buffer sizes, from near on-policy to highly stale data, while exhibiting a superior bias–variance trade-off, high training stability, and improved sample efficiency.
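To make the abstract's "implicit, tunable constraint" concrete, here is one plausible reading under the weight form assumed in the sketch above, w(r) = exp(−(log r)²/(2σ²)); the symbol σ and the exact functional form are assumptions, not taken from the paper. The bandwidth σ bounds how far the importance ratio can drift before its contribution is damped, playing a role analogous to PPO's clip range ε but without zeroing gradients:

```latex
% Assumed Gaussian trust weight (sketch): w(r) = \exp\!\bigl(-(\log r)^2 / (2\sigma^2)\bigr).
% Its damping threshold is an explicit function of \sigma:
\[
  w(r) \le \delta
  \iff
  \lvert \log r \rvert \ge \sigma \sqrt{2 \ln(1/\delta)},
\]
% e.g. the weight falls below e^{-1/2} \approx 0.61 as soon as |\log r|
% exceeds \sigma, so tuning \sigma tunes the effective trust region,
% while w(r) > 0 everywhere keeps gradients non-zero, unlike hard clipping.
```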