AI Agents relevance: 10/10

Beyond Task Completion: Revealing Corrupt Success in LLM Agents through Procedure-Aware Evaluation

Hongliu Cao, Ilias Driouich, Eoin Thomas
arXiv: 2603.03116v1 Published: 2026-03-03 Updated: 2026-03-03

AI Summary

Proposes the Procedure-Aware Evaluation (PAE) framework, which reveals the "corrupt success" problem hidden behind task success in LLM agents.

Key Contributions

  • Proposes the Procedure-Aware Evaluation (PAE) framework for assessing the procedural integrity of LLM agents.
  • Reveals the "corrupt success" phenomenon in LLM agents and analyzes how it manifests across models and benchmarks.
  • Evaluates agent performance along multiple axes: Utility, Efficiency, Interaction Quality, and Procedural Integrity.

Methodology

Formalizes agent procedures as structured observations, evaluates the consistency between what agents observe, communicate, and execute, and applies a multi-dimensional gating mechanism to disqualify corrupt outcomes.
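The gating idea above can be sketched minimally. This is an illustrative assumption, not the paper's implementation: the four axis names follow the paper, but the scoring scale (0-1) and the all-axes-must-pass threshold are invented for the sketch.

```python
# Minimal sketch of multi-dimensional gating (illustrative only; axis
# names follow the paper, scores and thresholds are assumptions).

AXES = ("utility", "efficiency", "interaction_quality", "procedural_integrity")

def gated_success(scores: dict, threshold: float = 1.0) -> bool:
    """Count a task as a genuine success only if every axis passes its gate.

    `scores` maps each axis to a value in [0, 1]. A violation on any
    single axis categorically disqualifies the outcome, turning a
    benchmark-reported success into a corrupt success.
    """
    return all(scores.get(axis, 0.0) >= threshold for axis in AXES)

# A run that completed the task (utility = 1.0) but broke a policy:
run = {"utility": 1.0, "efficiency": 1.0,
       "interaction_quality": 1.0, "procedural_integrity": 0.0}
print(gated_success(run))  # False: a corrupt success, not a real one
```

The key design point is that gating is categorical rather than additive: a high utility score cannot buy back a procedural violation.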

Original Abstract

Large Language Model (LLM)-based agents are increasingly adopted in high-stakes settings, but current benchmarks evaluate mainly whether a task was completed, not how. We introduce Procedure-Aware Evaluation (PAE), a framework that formalizes agent procedures as structured observations and exposes consistency relationships between what agents observe, communicate, and execute. PAE evaluates agents along complementary axes (Utility, Efficiency, Interaction Quality, Procedural Integrity) and applies multi-dimensional gating that categorically disqualifies corrupt outcomes. Evaluating state-of-the-art LLM agents on tau-bench yields findings at the axis, compliance, and benchmark levels. At the axis level, the dimensions capture non-redundant failure modes: utility masks reliability gaps, speed does not imply precision, and conciseness does not predict intent adherence. At the procedural compliance level, 27-78% of benchmark-reported successes are corrupt successes concealing violations across interaction and integrity. Furthermore, gating substantially collapses the Pass^4 rate and affects model rankings. The analysis of corrupt success cases reveals distinctive per-model failure signatures: GPT-5 spreads errors across policy, execution, and intent dimensions; Kimi-K2-Thinking concentrates 78% of violations in policy faithfulness and compliance; and Mistral-Large-3 is dominated by faithfulness failures. At the benchmark level, our analysis exposes structural flaws in the benchmark design, including task scope gaps, contradictory reward signals, and simulator artifacts that produce accidental successes.
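The Pass^k collapse the abstract mentions can be illustrated with a small sketch: Pass^k credits a task only if all k independent trials succeed, so requiring gated (procedure-clean) success in every trial shrinks the rate sharply. The trial data below is invented purely for illustration; it is not from the paper.

```python
# Hypothetical illustration of how gating collapses Pass^k (here k = 4).
# Each trial records whether the task was completed and whether it also
# passed all procedural gates; all values below are invented.

def pass_k(trials_per_task, key):
    """Fraction of tasks whose every trial satisfies `key` (Pass^k)."""
    ok = [all(trial[key] for trial in trials) for trials in trials_per_task]
    return sum(ok) / len(ok)

tasks = [
    [{"completed": True, "gated": True}] * 4,    # clean on all 4 trials
    [{"completed": True, "gated": True}] * 3
      + [{"completed": True, "gated": False}],   # one corrupt trial
    [{"completed": True, "gated": False}] * 4,   # corrupt on every trial
]

print(pass_k(tasks, "completed"))  # 1.0 -> perfect by completion alone
print(pass_k(tasks, "gated"))      # ~0.33 once gating is applied
```

Because a single corrupt trial fails the whole task under Pass^k, even a modest per-trial violation rate compounds into a large drop, which is why gating can also reorder model rankings.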

Tags

LLM Agent Evaluation · Procedural Integrity · Corrupt Success

arXiv Category

cs.AI