AI Agents relevance: 9/10

Anticipatory Planning for Multimodal AI Agents

Yongyuan Liang, Shijie Zhou, Yu Gu, Hao Tan, Gang Wu, Franck Dernoncourt, Jihyung Kil, Ryan A. Rossi, Ruiyi Zhang
arXiv: 2603.16777v1 published: 2026-03-17 updated: 2026-03-17

AI Summary

Proposes the TraceR1 framework, which performs anticipatory reasoning by forecasting trajectories, improving multimodal agents' planning ability and execution robustness.

Key Contributions

  • Proposes the TraceR1 framework, which explicitly trains anticipatory reasoning
  • Uses two-stage reinforcement learning to improve planning consistency and execution accuracy
  • Validates TraceR1's effectiveness on multiple benchmarks

Methodology

Adopts two-stage reinforcement learning: trajectory-level reinforcement learning first enforces global consistency, then execution feedback fine-tunes step-level accuracy.
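The two-stage idea can be illustrated with a toy selection procedure over enumerated candidate plans: a first pass scores whole trajectories for global consistency, and a second pass re-ranks the survivors by step-level executability from a frozen "tool executor". This is a minimal sketch under assumed names (`consistency_reward`, `execution_reward`, the action vocabulary, and the executor) — it is not the paper's actual RL training loop, which optimizes a policy rather than enumerating plans.

```python
from itertools import product

# Assumed toy action vocabulary and plan length (not from the paper).
ACTIONS = ["open_app", "click", "type", "submit"]
HORIZON = 3

def consistency_reward(plan):
    # Stage-1 proxy for "global consistency across predicted action
    # sequences": reward plans whose steps follow an assumed dependency
    # order (each action's index is non-decreasing along the plan).
    order = {a: i for i, a in enumerate(ACTIONS)}
    return sum(1 for a, b in zip(plan, plan[1:]) if order[b] >= order[a])

def execution_reward(plan, executor):
    # Stage-2 proxy for grounded execution feedback: count the steps
    # the frozen executor accepts as executable.
    return sum(1 for step in plan if executor(step))

def two_stage_select(candidates, executor):
    # Stage 1: keep only the plans with maximal trajectory-level consistency.
    best_c = max(consistency_reward(p) for p in candidates)
    stage1 = [p for p in candidates if consistency_reward(p) == best_c]
    # Stage 2: among those, prefer the most executable plan.
    return max(stage1, key=lambda p: execution_reward(p, executor))

candidates = list(product(ACTIONS, repeat=HORIZON))
executor = lambda step: step != "type"  # pretend "type" fails in this tool env
plan = two_stage_select(candidates, executor)
```

The point of the two passes is that neither signal alone suffices: consistency alone keeps plans with inexecutable steps, and executability alone admits incoherent action orderings.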

Original Abstract

Recent advances in multimodal agents have improved computer-use interaction and tool-usage, yet most existing systems remain reactive, optimizing actions in isolation without reasoning about future states or long-term goals. This limits planning coherence and prevents agents from reliably solving high-level, multi-step tasks. We introduce TraceR1, a two-stage reinforcement learning framework that explicitly trains anticipatory reasoning by forecasting short-horizon trajectories before execution. The first stage performs trajectory-level reinforcement learning with rewards that enforce global consistency across predicted action sequences. The second stage applies grounded reinforcement fine-tuning, using execution feedback from frozen tool agents to refine step-level accuracy and executability. TraceR1 is evaluated across seven benchmarks, covering online computer-use, offline computer-use benchmarks, and multimodal tool-use reasoning tasks, where it achieves substantial improvements in planning stability, execution robustness, and generalization over reactive and single-stage baselines. These results show that anticipatory trajectory reasoning is a key principle for building multimodal agents that can reason, plan, and act effectively in complex real-world environments.

Tags

AI Agent · Multimodal Learning · Reinforcement Learning · Planning · Anticipatory Reasoning

arXiv Categories

cs.AI