Agent Tuning & Optimization (Relevance: 8/10)

ProRL Agent: Rollout-as-a-Service for RL Training of Multi-Turn LLM Agents

Hao Zhang, Mingjie Liu, Shaokun Zhang, Songyang Han, Jian Hu, Zhenghui Jin, Yuchi Zhang, Shizhe Diao, Ximing Lu, Binfeng Xu, Zhiding Yu, Jan Kautz, Yi Dong
arXiv: 2603.18815v1 Published: 2026-03-19 Updated: 2026-03-19

AI Summary

ProRL Agent presents a Rollout-as-a-Service framework for RL training of LLM agents, improving scalability and maintainability.

Main Contributions

  • Proposes a Rollout-as-a-Service framework for training LLM agents
  • Designs scalable agentic rollout infrastructure
  • Provides standardized sandbox environments

Methodology

The full agentic rollout lifecycle is managed through an API service, and RL training is used to improve the long-horizon behavior of LLM agents on complex tasks.
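To make the rollout-as-a-service idea concrete, here is a minimal in-process sketch of such a lifecycle (create a session, step the agent-environment loop, collect the trajectory). The `RolloutSession` class and its methods are hypothetical illustrations, not the actual ProRL Agent API; a real deployment would route these calls to a remote sandboxed service.

```python
# Hypothetical sketch of a rollout-as-a-service lifecycle (not the real
# ProRL Agent API). The trainer talks only to the session interface, so
# rollout orchestration stays decoupled from the training loop.
from dataclasses import dataclass, field

@dataclass
class RolloutSession:
    # In-process stand-in for a remote sandboxed rollout session.
    task: str
    max_turns: int
    turns: list = field(default_factory=list)
    done: bool = False

    def step(self, action: str) -> str:
        # A real service would execute `action` inside a sandbox and
        # return the environment observation; here we echo a placeholder.
        obs = f"observation after {action!r}"
        self.turns.append((action, obs))
        if len(self.turns) >= self.max_turns:
            self.done = True
        return obs

    def trajectory(self) -> list:
        # The trainer consumes (action, observation) pairs as RL rollouts.
        return list(self.turns)

# Usage: a two-turn rollout for a software-engineering-style task.
session = RolloutSession(task="fix failing unit test", max_turns=2)
session.step("run pytest")
session.step("edit bug.py")
print(session.done, len(session.trajectory()))  # → True 2
```

Because the trainer never touches sandbox internals, the same training loop can be pointed at a different rollout backend without migration work, which is the maintainability argument the abstract makes.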

Original Abstract

Multi-turn LLM agents are increasingly important for solving complex, interactive tasks, and reinforcement learning (RL) is a key ingredient for improving their long-horizon behavior. However, RL training requires generating large numbers of sandboxed rollout trajectories, and existing infrastructures often couple rollout orchestration with the training loop, making systems hard to migrate and maintain. Under the rollout-as-a-service philosophy, we present ProRL Agent, a scalable infrastructure that serves the full agentic rollout lifecycle through an API service. ProRL Agent also provides standardized and extensible sandbox environments that support diverse agentic tasks in rootless HPC settings. We validate ProRL Agent through RL training on software engineering, math, STEM, and coding tasks. ProRL Agent is open-sourced and integrated as part of NVIDIA NeMo Gym.

Tags

LLM Agent, Reinforcement Learning, Rollout-as-a-Service

arXiv Category

cs.AI