Agent Tuning & Optimization — Relevance: 9/10

ProCeedRL: Process Critic with Exploratory Demonstration Reinforcement Learning for LLM Agentic Reasoning

Jingyue Gao, Yanjiang Guo, Xiaoshuai Chen, Jianyu Chen
arXiv: 2604.02006v1 — Published: 2026-04-02, Updated: 2026-04-02

AI Summary

ProCeedRL improves the reasoning ability of LLM agents on complex tasks through process-level critique and exploratory demonstration reinforcement learning.

Key Contributions

  • Proposes the ProCeedRL framework
  • Introduces a process-level critic that monitors interactions in real time
  • Uses reflection-based demonstrations to guide agent exploration

Methodology

ProCeedRL combines a process-level critic with exploratory demonstration reinforcement learning: through real-time monitoring and reflection-based guidance, it improves the agent's ability to explore in complex environments.

Original Abstract

Reinforcement Learning (RL) significantly enhances the reasoning abilities of large language models (LLMs), yet applying it to multi-turn agentic tasks remains challenging due to the long-horizon nature of interactions and the stochasticity of environmental feedback. We identify a structural failure mode in agentic exploration: suboptimal actions elicit noisy observations into misleading contexts, which further weaken subsequent decision-making, making recovery increasingly difficult. This cumulative feedback loop of errors renders standard exploration strategies ineffective and susceptible to the model's reasoning and the environment's randomness. To mitigate this issue, we propose ProCeedRL: Process Critic with Explorative Demonstration RL, shifting exploration from passive selection to active intervention. ProCeedRL employs a process-level critic to monitor interactions in real time, incorporating reflection-based demonstrations to guide agents in stopping the accumulation of errors. We find that this approach significantly exceeds the model's saturated exploration performance, demonstrating substantial exploratory benefits. By learning from exploratory demonstrations and on-policy samples, ProCeedRL significantly improves exploration efficiency and achieves superior performance on complex deep search and embodied tasks.
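The intervention loop the abstract describes — a process-level critic watches a rollout and, when errors start to compound, injects a reflection-based demonstration instead of letting the agent keep acting — can be sketched as a toy Python loop. Everything here (the critic heuristic, the `reflection_demonstration` helper, the 0.5 threshold) is an illustrative assumption, not the paper's actual implementation.

```python
import random

def process_critic(trajectory):
    """Toy process-level critic: score the recent trajectory in [0, 1].
    Here, simply the fraction of the last 3 steps marked 'good';
    a low score stands in for accumulated error (assumption)."""
    recent = trajectory[-3:]
    return sum(1 for step in recent if step["good"]) / max(len(recent), 1)

def reflection_demonstration(trajectory):
    """Return a corrective step, standing in for a reflection-based
    demonstration that stops the accumulation of errors (hypothetical)."""
    return {"action": "reflect_and_correct", "good": True, "demo": True}

def noisy_policy(rng):
    """Toy on-policy action: 'good' only 40% of the time, modeling
    suboptimal actions under stochastic environment feedback."""
    return {"action": "act", "good": rng.random() < 0.4, "demo": False}

def rollout(policy, horizon=8, threshold=0.5, seed=0):
    """Collect one trajectory, letting the critic intervene mid-rollout
    (active intervention rather than passive trajectory selection)."""
    rng = random.Random(seed)
    trajectory = []
    for _ in range(horizon):
        if trajectory and process_critic(trajectory) < threshold:
            step = reflection_demonstration(trajectory)  # critic intervenes
        else:
            step = policy(rng)  # agent acts on-policy
        trajectory.append(step)
    return trajectory

traj = rollout(noisy_policy)
# The resulting trajectory mixes demonstration steps with on-policy
# steps, which is the kind of data ProCeedRL learns from.
```

The point of the sketch is the control flow, not the heuristics: the critic acts per step inside the rollout, so a misleading context is cut off before it can degrade subsequent decisions.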

Tags

LLM Agent · Reinforcement Learning · Reasoning · Exploration

arXiv Categories

cs.AI