RL-VLA$^3$: Reinforcement Learning VLA Accelerating via Full Asynchronism
AI Summary
Proposes RL-VLA$^3$, a framework that accelerates reinforcement learning training of VLA models through a fully asynchronous strategy, improving training efficiency.
Key Contributions
- Proposes a fully asynchronous reinforcement learning training framework for VLA models.
- Designs a multi-level decoupled architecture, comprising asynchronous parallel environment interaction, streaming policy generation, and decoupled training updates.
- Experimentally validates the framework's effectiveness and scalability across VLA models and environments.
Methodology
Achieves fully asynchronous RL training for VLA models by parallelizing environment interaction asynchronously, executing policy generation in a streaming fashion, and decoupling training updates, thereby raising overall throughput.
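A minimal sketch of the decoupling idea (hypothetical illustration, not the paper's actual code): environment workers push transitions into a shared queue while the trainer consumes them in batches, so collection never blocks on model updates. All names and batch sizes here are illustrative assumptions.

```python
import queue
import threading

# Illustrative constants (not from the paper).
NUM_ENVS = 4        # asynchronous environment workers
STEPS_PER_ENV = 8   # transitions each worker produces
BATCH_SIZE = 8      # transitions consumed per trainer update

traj_queue: "queue.Queue" = queue.Queue()

def env_worker(env_id: int) -> None:
    """Simulate one environment interacting with the policy:
    each worker streams transitions into the shared queue
    independently of the trainer's update schedule."""
    for step in range(STEPS_PER_ENV):
        # In a real system this would be (obs, action, reward, ...).
        traj_queue.put((env_id, step))

def trainer() -> int:
    """Consume transitions in batches; each full batch stands in
    for one decoupled actor gradient update."""
    updates = 0
    batch = []
    for _ in range(NUM_ENVS * STEPS_PER_ENV):
        batch.append(traj_queue.get())
        if len(batch) == BATCH_SIZE:
            updates += 1  # placeholder for a policy update step
            batch.clear()
    return updates

workers = [threading.Thread(target=env_worker, args=(i,))
           for i in range(NUM_ENVS)]
for w in workers:
    w.start()

n_updates = trainer()

for w in workers:
    w.join()

print(n_updates)  # 32 transitions in batches of 8 -> 4 updates
```

Because producers and the consumer only synchronize through the queue, slow environments or a slow trainer degrade throughput gracefully instead of stalling the whole pipeline, which is the core benefit the framework's multi-level decoupling targets.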
Original Abstract
In recent years, Vision-Language-Action (VLA) models have emerged as a crucial pathway towards general embodied intelligence, yet their training efficiency has become a key bottleneck. Although existing reinforcement learning (RL)-based training frameworks like RLinf can enhance model generalization, they still rely on synchronous execution, leading to severe resource underutilization and throughput limitations during environment interaction, policy generation (rollout), and model update phases (actor). To overcome this challenge, this paper, for the first time, proposes and implements a fully-asynchronous policy training framework encompassing the entire pipeline from environment interaction, rollout generation, to actor policy updates. Systematically drawing inspiration from asynchronous optimization ideas in large model RL, our framework designs a multi-level decoupled architecture. This includes asynchronous parallelization of environment interaction and trajectory collection, streaming execution for policy generation, and decoupled scheduling for training updates. We validated the effectiveness of our method across diverse VLA models and environments. On the LIBERO benchmark, the framework achieves throughput improvements of up to 59.25\% compared to existing synchronous strategies. When deeply optimizing separation strategies, throughput can be increased by as much as 126.67\%. We verified the effectiveness of each asynchronous component via ablation studies. Scaling law validation across 8 to 256 GPUs demonstrates our method's excellent scalability under most conditions.