Agent Tuning & Optimization Relevance: 8/10

AdaptEvolve: Improving Efficiency of Evolutionary AI Agents through Adaptive Model Selection

Pretam Ray, Pratik Prabhanjan Brahma, Zicheng Liu, Emad Barsoum
arXiv: 2602.11931v1 Published: 2026-02-12 Updated: 2026-02-12

AI Summary

AdaptEvolve balances computational efficiency and task performance in evolutionary agents through confidence-driven LLM selection.

Key Contributions

  • Proposes AdaptEvolve, an adaptive LLM-selection framework
  • Uses intrinsic generation confidence to estimate real-time solvability
  • Substantially reduces inference cost while maintaining high accuracy

Methodology

Within an evolutionary sequential-refinement framework, AdaptEvolve uses each LLM's intrinsic generation confidence to dynamically select a sufficiently capable model for inference at each refinement step.
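A cascade of this kind can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the model names, the confidence threshold, and the mean-token-probability confidence proxy are all assumptions made here for concreteness.

```python
# Hypothetical sketch of confidence-driven model selection in a cascade:
# try cheaper models first and escalate only while confidence is low.
import math
from dataclasses import dataclass
from typing import Callable

@dataclass
class Generation:
    text: str
    token_logprobs: list[float]  # log-probability of each sampled token

    @property
    def confidence(self) -> float:
        # Mean token probability as a simple intrinsic-confidence proxy.
        if not self.token_logprobs:
            return 0.0
        return sum(math.exp(lp) for lp in self.token_logprobs) / len(self.token_logprobs)

def cascade_generate(
    prompt: str,
    models: list[Callable[[str], Generation]],  # ordered cheap -> expensive
    threshold: float = 0.8,  # illustrative value, not from the paper
) -> Generation:
    """Return the first generation confident enough, else the best attempt."""
    best = None
    for model in models:
        gen = model(prompt)
        if best is None or gen.confidence > best.confidence:
            best = gen
        if gen.confidence >= threshold:
            return gen  # confident enough: no need to escalate further
    return best  # fall back to the most confident attempt seen

# Toy stand-ins for a small and a large model.
def small_model(prompt: str) -> Generation:
    return Generation("draft answer", [math.log(0.6), math.log(0.7)])

def large_model(prompt: str) -> Generation:
    return Generation("refined answer", [math.log(0.9), math.log(0.95)])

result = cascade_generate("solve task", [small_model, large_model])
print(result.text)  # small model (conf 0.65) is rejected; large model accepted
```

Here the small model's mean token probability (0.65) falls below the threshold, so the request escalates to the larger model, whose confidence (0.925) clears it.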

Original Abstract

Evolutionary agentic systems intensify the trade-off between computational efficiency and reasoning capability by repeatedly invoking large language models (LLMs) during inference. This setting raises a central question: how can an agent dynamically select an LLM that is sufficiently capable for the current generation step while remaining computationally efficient? While model cascades offer a practical mechanism for balancing this trade-off, existing routing strategies typically rely on static heuristics or external controllers and do not explicitly account for model uncertainty. We introduce AdaptEvolve: Adaptive LLM Selection for Multi-LLM Evolutionary Refinement within an evolutionary sequential refinement framework that leverages intrinsic generation confidence to estimate real-time solvability. Empirical results show that confidence-driven selection yields a favourable Pareto frontier, reducing total inference cost by an average of 37.9% across benchmarks while retaining 97.5% of the upper-bound accuracy of static large-model baselines. Our code is available at https://github.com/raypretam/adaptive_llm_selection.

Tags

AI Agents LLM Selection Evolutionary Algorithms

arXiv Categories

cs.CL cs.AI