Agent Tuning & Optimization Relevance: 9/10

RewardFlow: Topology-Aware Reward Propagation on State Graphs for Agentic RL with Large Language Models

Xiao Feng, Bo Han, Zhanke Zhou, Jiaqi Fan, Jiangchao Yao, Ka Ho Li, Dahai Yu, Michael Kwok-Po Ng
arXiv: 2603.18859v1 Published: 2026-03-19 Updated: 2026-03-19

AI Summary

RewardFlow uses topology-aware reward propagation on state graphs to improve the reasoning ability of LLM agents in sparse-reward environments.

Main Contributions

  • Proposes RewardFlow, a lightweight method for estimating state-level rewards
  • Leverages the topological structure of state graphs to analyze each state's contribution to success
  • Uses RewardFlow as a dense reward, substantially improving the agent's RL optimization

Methodology

Construct a state graph, analyze each state's contribution, obtain objective state-level rewards via graph propagation, and use these dense rewards to optimize the agent with RL.
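The pipeline above can be sketched as follows. This is an illustrative toy implementation, not the official RewardFlow code: the function name `propagate_rewards`, the successor-averaging update, and the decay factor `gamma` are all assumptions made for the sketch. It builds a state graph from trajectories, seeds terminal states with the sparse terminal rewards, and propagates reward backward through the graph so that intermediate states receive a score reflecting their contribution to eventual success.

```python
from collections import defaultdict


def propagate_rewards(trajectories, terminal_rewards, gamma=0.9, iters=50):
    """Hypothetical topology-aware reward propagation sketch.

    trajectories: list of state-id sequences observed during rollouts.
    terminal_rewards: {state_id: sparse reward} for terminal states.
    Returns {state_id: propagated dense reward}.
    """
    # Build the state graph: record each state's successors across trajectories.
    succ = defaultdict(set)
    states = set()
    for traj in trajectories:
        states.update(traj)
        for a, b in zip(traj, traj[1:]):
            succ[a].add(b)

    # Seed with the sparse terminal rewards.
    reward = {s: terminal_rewards.get(s, 0.0) for s in states}

    # Fixed-point backward propagation: each state keeps its own terminal
    # reward and adds the discounted average reward of its successors.
    for _ in range(iters):
        new = {}
        for s in states:
            base = terminal_rewards.get(s, 0.0)
            if succ[s]:
                base += gamma * sum(reward[t] for t in succ[s]) / len(succ[s])
            new[s] = base
        reward = new
    return reward


# Toy usage: one successful path (s0 -> s1 -> s2) and one dead end (s0 -> s3).
rewards = propagate_rewards(
    [["s0", "s1", "s2"], ["s0", "s3"]],
    {"s2": 1.0},
)
# s1 inherits discounted credit from s2; s0 is diluted by the failed branch.
```

The propagated scores would then be plugged in as per-state dense rewards during RL optimization, replacing the single terminal signal.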

Original Abstract

Reinforcement learning (RL) holds significant promise for enhancing the agentic reasoning capabilities of large language models (LLMs) with external environments. However, the inherent sparsity of terminal rewards hinders fine-grained, state-level optimization. Although process reward modeling offers a promising alternative, training dedicated reward models often entails substantial computational costs and scaling difficulties. To address these challenges, we introduce RewardFlow, a lightweight method for estimating state-level rewards tailored to agentic reasoning tasks. RewardFlow leverages the intrinsic topological structure of states within reasoning trajectories by constructing state graphs. This enables an analysis of state-wise contributions to success, followed by topology-aware graph propagation to quantify contributions and yield objective, state-level rewards. When integrated as dense rewards for RL optimization, RewardFlow substantially outperforms prior RL baselines across four agentic reasoning benchmarks, demonstrating superior performance, robustness, and training efficiency. The implementation of RewardFlow is publicly available at https://github.com/tmlr-group/RewardFlow.

Tags

Reinforcement Learning · Large Language Models · Agents · Reward Shaping · State Graphs

arXiv Categories

cs.AI cs.CL cs.LG