CCTU: A Benchmark for Tool Use under Complex Constraints
AI Summary
The CCTU benchmark evaluates LLM tool use under complex constraints, revealing model shortcomings and pointing to directions for future research.
Main Contributions
- Proposes CCTU, a benchmark for evaluating LLM tool use under complex constraints
- Builds a dataset of 200 test cases spanning 12 constraint categories
- Develops an executable constraint validation module for assessing how well LLMs comply with constraints
Methodology
Construct a tool-use test set with complex constraints, use executable modules to verify whether the LLM satisfies each constraint, and analyze the causes of constraint violations.
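The paper's validation module performs step-level checks on each tool call during multi-turn interaction. As a rough illustration only (the actual CCTU code is not shown here; the constraint types, class names, and thresholds below are invented for the example), a step-level validator for two of the described dimensions (toolset and resource) might look like:

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    name: str
    args: dict

@dataclass
class ConstraintValidator:
    """Hypothetical step-level validator: checks each tool call against
    a toolset constraint and a resource (call-budget) constraint."""
    allowed_tools: set                 # toolset constraint: tools the model may call
    max_total_calls: int               # resource constraint: budget on tool calls
    violations: list = field(default_factory=list)
    calls_made: int = 0

    def validate(self, call: ToolCall) -> bool:
        """Return True iff this step satisfies all constraints;
        record human-readable violation messages otherwise."""
        ok = True
        if call.name not in self.allowed_tools:
            self.violations.append(f"toolset: '{call.name}' is not permitted")
            ok = False
        if self.calls_made + 1 > self.max_total_calls:
            self.violations.append(f"resource: budget of {self.max_total_calls} calls exceeded")
            ok = False
        self.calls_made += 1
        return ok

# Validate a short multi-turn trajectory step by step.
validator = ConstraintValidator(allowed_tools={"search", "calculator"}, max_total_calls=2)
trajectory = [
    ToolCall("search", {"q": "weather"}),
    ToolCall("email", {"to": "x@y.z"}),   # violates the toolset constraint
    ToolCall("search", {"q": "news"}),    # violates the call budget
]
results = [validator.validate(c) for c in trajectory]
print(results)               # [True, False, False]
print(validator.violations)
```

Validating per step rather than only on the final answer is what lets the harness attribute each violation to a specific constraint dimension and feed that back to the model, as in the paper's self-refinement analysis.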
Original Abstract
Solving problems through tool use under explicit constraints constitutes a highly challenging yet unavoidable scenario for large language models (LLMs), requiring capabilities such as function calling, instruction following, and self-refinement. However, progress has been hindered by the absence of dedicated evaluations. To address this, we introduce CCTU, a benchmark for evaluating LLM tool use under complex constraints. CCTU is grounded in a taxonomy of 12 constraint categories spanning four dimensions (i.e., resource, behavior, toolset, and response). The benchmark comprises 200 carefully curated and challenging test cases across diverse tool-use scenarios, each involving an average of seven constraint types and an average prompt length exceeding 4,700 tokens. To enable reliable evaluation, we develop an executable constraint validation module that performs step-level validation and enforces compliance during multi-turn interactions between models and their environments. We evaluate nine state-of-the-art LLMs in both thinking and non-thinking modes. Results indicate that when strict adherence to all constraints is required, no model achieves a task completion rate above 20%. Further analysis reveals that models violate constraints in over 50% of cases, particularly in the resource and response dimensions. Moreover, LLMs demonstrate limited capacity for self-refinement even after receiving detailed feedback on constraint violations, highlighting a critical bottleneck in the development of robust tool-use agents. To facilitate future research, we release the data and code.