AI Agents relevance: 9/10

In-Context Reinforcement Learning for Tool Use in Large Language Models

Yaoqi Ye, Yiran Zhao, Keyu Duan, Zeyu Zheng, Kenji Kawaguchi, Cihang Xie, Michael Qizhe Shieh
arXiv: 2603.08068v1 发布: 2026-03-09 更新: 2026-03-09

AI Summary

Proposes ICRL, a method that enables LLMs to learn effective tool use through in-context learning without SFT, improving their reasoning ability.

Key Contributions

  • Proposes the In-Context Reinforcement Learning (ICRL) framework
  • Eliminates the need for SFT, reducing reliance on labeled data
  • Achieves SOTA performance on multiple reasoning and tool-use benchmarks

Methodology

Uses a small number of in-context examples to guide the LLM in invoking external tools during RL rollouts, then gradually reduces the number of examples over training until the model achieves zero-shot tool use.
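The annealing idea above can be sketched as a rollout-prompt builder that prepends a scheduled number of tool-use demonstrations. This is an illustrative sketch only: the names (`num_examples`, `build_rollout_prompt`, `few_shot_pool`, `decay_steps`) and the linear decay schedule are assumptions, not details from the paper.

```python
# Hypothetical sketch of ICRL-style rollout prompting: the count of in-context
# tool-use demos is annealed from n_start down to zero over training, so the
# policy eventually invokes tools zero-shot. Schedule and names are illustrative.

def num_examples(step: int, n_start: int = 4, decay_steps: int = 1000) -> int:
    """Linearly decay the in-context example count to zero over training."""
    frac = max(0.0, 1.0 - step / decay_steps)
    return round(n_start * frac)

def build_rollout_prompt(question: str, few_shot_pool: list[str], step: int) -> str:
    """Prepend the scheduled number of tool-use demos to the task prompt."""
    k = num_examples(step)
    demos = "\n\n".join(few_shot_pool[:k])
    header = demos + "\n\n" if demos else ""
    return header + f"Question: {question}\nUse the available tools if helpful."
```

In an RL loop, each rollout prompt would be built this way while the reward (e.g., final-answer correctness) stays unchanged, so the model internalizes tool-calling behavior even as the demonstrations disappear.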

Original Abstract

While large language models (LLMs) exhibit strong reasoning abilities, their performance on complex tasks is often constrained by the limitations of their internal knowledge. A compelling approach to overcome this challenge is to augment these models with external tools -- such as Python interpreters for mathematical computations or search engines for retrieving factual information. However, enabling models to use these tools effectively remains a significant challenge. Existing methods typically rely on cold-start pipelines that begin with supervised fine-tuning (SFT), followed by reinforcement learning (RL). These approaches often require substantial amounts of labeled data for SFT, which is expensive to annotate or synthesize. In this work, we propose In-Context Reinforcement Learning (ICRL), an RL-only framework that eliminates the need for SFT by leveraging few-shot prompting during the rollout stage of RL. Specifically, ICRL introduces in-context examples within the rollout prompts to teach the model how to invoke external tools. Furthermore, as training progresses, the number of in-context examples is gradually reduced, eventually reaching a zero-shot setting where the model learns to call tools independently. We conduct extensive experiments across a range of reasoning and tool-use benchmarks. Results show that ICRL achieves state-of-the-art performance, demonstrating its effectiveness as a scalable, data-efficient alternative to traditional SFT-based pipelines.

Tags

LLM · Tool Use · Reinforcement Learning · In-Context Learning

arXiv Category

cs.AI