AI Agents Relevance: 9/10

Calibrate-Then-Act: Cost-Aware Exploration in LLM Agents

Wenxuan Ding, Nicholas Tomlin, Greg Durrett
arXiv: 2602.16699v1 Published: 2026-02-18 Updated: 2026-02-18

AI Summary

Proposes the Calibrate-Then-Act framework, which makes LLM agents explicitly weigh cost-uncertainty tradeoffs during environment exploration, improving decision-making.

Main Contributions

  • Proposes the Calibrate-Then-Act (CTA) framework
  • Formalizes information retrieval and coding tasks as sequential decision-making problems under uncertainty
  • Demonstrates that CTA helps agents discover more optimal decision-making strategies

Methodology

By supplying prior information to the LLM agent, the framework has the agent explicitly calibrate the cost-uncertainty tradeoff before acting, enabling more optimal environment exploration.

Original Abstract

LLMs are increasingly being used for complex problems which are not necessarily resolved in a single response, but require interacting with an environment to acquire information. In these scenarios, LLMs must reason about inherent cost-uncertainty tradeoffs in when to stop exploring and commit to an answer. For instance, on a programming task, an LLM should test a generated code snippet if it is uncertain about the correctness of that code; the cost of writing a test is nonzero, but typically lower than the cost of making a mistake. In this work, we show that we can induce LLMs to explicitly reason about balancing these cost-uncertainty tradeoffs, then perform more optimal environment exploration. We formalize multiple tasks, including information retrieval and coding, as sequential decision-making problems under uncertainty. Each problem has latent environment state that can be reasoned about via a prior which is passed to the LLM agent. We introduce a framework called Calibrate-Then-Act (CTA), where we feed the LLM this additional context to enable it to act more optimally. This improvement is preserved even under RL training of both the baseline and CTA. Our results on information-seeking QA and on a simplified coding task show that making cost-benefit tradeoffs explicit with CTA can help agents discover more optimal decision-making strategies.
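The abstract's programming example suggests a simple expected-cost decision rule: test when the expected cost of committing blindly exceeds the cost of one more exploration step. A minimal sketch of that rule, assuming illustrative names and costs (`p_correct`, `cost_test`, `cost_mistake` are not from the paper):

```python
# Sketch of the cost-uncertainty tradeoff described in the abstract.
# All names and cost values are illustrative assumptions, not the paper's API.

def should_test(p_correct: float, cost_test: float, cost_mistake: float) -> bool:
    """Explore further (write a test) when the expected cost of committing
    to the answer now exceeds the fixed cost of testing."""
    expected_cost_of_committing = (1.0 - p_correct) * cost_mistake
    return expected_cost_of_committing > cost_test

# If the agent is 70% sure its snippet is correct, a test costs 1 unit,
# and a mistake costs 10 units, then 0.3 * 10 = 3 > 1, so it should test.
print(should_test(0.7, 1.0, 10.0))   # True
print(should_test(0.99, 1.0, 10.0))  # False: committing is now cheaper
```

In CTA terms, the calibrated prior over the latent environment state is what supplies an estimate like `p_correct`; the decision rule itself is the "act" step.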

Tags

LLM Agents Cost-Awareness Exploration Decision-Making

arXiv Categories

cs.CL cs.AI