AI Agents relevance: 9/10

RetailBench: Evaluating Long-Horizon Autonomous Decision-Making and Strategy Stability of LLM Agents in Realistic Retail Environments

Linghua Zhang, Jun Wang, Jingtong Wu, Zhisong Zhang
arXiv: 2603.16453v1 · Published: 2026-03-17 · Updated: 2026-03-17

AI Summary

RetailBench evaluates LLMs' capacity for long-horizon autonomous decision-making in complex retail environments; the paper also proposes the Evolving Strategy & Execution framework.

Key Contributions

  • Introduces RetailBench, a high-fidelity retail-environment benchmark
  • Proposes the Evolving Strategy & Execution framework, which separates strategic reasoning from action execution
  • Demonstrates experimentally that the framework improves LLM stability and efficiency on long-horizon tasks

Methodology

The authors build a retail simulation environment to evaluate LLM agents' long-horizon decision-making, propose a hierarchical framework separating strategy from execution, and validate the framework's performance experimentally.

Original Abstract

Large Language Model (LLM)-based agents have achieved notable success on short-horizon and highly structured tasks. However, their ability to maintain coherent decision-making over long horizons in realistic and dynamic environments remains an open challenge. We introduce RetailBench, a high-fidelity benchmark designed to evaluate long-horizon autonomous decision-making in realistic commercial scenarios, where agents must operate under stochastic demand and evolving external conditions. We further propose the Evolving Strategy & Execution framework, which separates high-level strategic reasoning from low-level action execution. This design enables adaptive and interpretable strategy evolution over time. It is particularly important for long-horizon tasks, where non-stationary environments and error accumulation require strategies to be revised at a different temporal scale than action execution. Experiments on eight state-of-the-art LLMs across progressively challenging environments show that our framework improves operational stability and efficiency compared to other baselines. However, performance degrades substantially as task complexity increases, revealing fundamental limitations in current LLMs for long-horizon, multi-factor decision-making.
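The core design the abstract describes, revising high-level strategy on a slower temporal scale than low-level action execution, can be illustrated with a minimal toy sketch. All names and the toy demand model below are illustrative assumptions, not taken from the paper; in the actual framework the `propose_strategy` step would be an LLM's strategic-reasoning call and `execute_step` its per-step action.

```python
import random

def propose_strategy(history):
    """Stand-in for high-level strategic reasoning: pick a price tier
    from recent average reward (a toy heuristic, not the paper's method)."""
    if not history:
        return {"price": 10.0}
    recent = history[-5:]
    return {"price": 12.0 if sum(recent) / len(recent) > 50 else 8.0}

def execute_step(strategy, rng):
    """Stand-in for low-level execution under stochastic demand."""
    demand = max(0.0, rng.gauss(10, 2) - 0.5 * strategy["price"])
    return demand * strategy["price"]  # revenue for this step

def run_episode(horizon=20, strategy_interval=5, seed=0):
    rng = random.Random(seed)
    rewards, strategy = [], None
    for t in range(horizon):
        # Strategy evolves only every `strategy_interval` steps,
        # decoupling its temporal scale from per-step execution.
        if t % strategy_interval == 0:
            strategy = propose_strategy(rewards)
        rewards.append(execute_step(strategy, rng))
    return rewards

rewards = run_episode()
print(len(rewards), round(sum(rewards), 1))
```

The point of the two-timescale split is visible in the loop structure: execution reacts to stochastic demand every step, while the strategy is only reconsidered at a coarser interval, which is where the paper argues stability over long horizons comes from.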

Tags

LLM Agent · Long-Horizon Task · Benchmark · Retail

arXiv Categories

cs.AI