Multimodal Learning Relevance: 9/10

VTC-Bench: Evaluating Agentic Multimodal Models via Compositional Visual Tool Chaining

Xuanyu Zhu, Yuhao Dong, Rundong Wang, Yang Shi, Zhipeng Wu, Yinlun Peng, YiFan Zhang, Yihang Lou, Yuanxing Zhang, Ziwei Liu, Yan Bai, Yuan Zhou
arXiv: 2603.15030v1 Published: 2026-03-16 Updated: 2026-03-16

AI Summary

VTC-Bench is a comprehensive benchmark for evaluating the tool-use capabilities of MLLMs, featuring 32 OpenCV-based tools and 680 problems.

Key Contributions

  • Proposes VTC-Bench, a benchmark for evaluating the tool-use capabilities of MLLMs.
  • VTC-Bench includes 32 OpenCV-based tools, supporting complex tool composition and long-horizon planning.
  • Experiments reveal the limitations of current MLLMs' visual agentic capabilities, particularly in tool generalization and composition.

Methodology

Constructs a benchmark of 32 OpenCV-based tools, designs 680 problems with ground-truth execution trajectories, and evaluates the performance of 19 MLLMs on them.
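To make the "tool chaining" setup concrete, here is a minimal sketch of a registry of visual tools and an executor that runs a model's multi-step plan. This is purely illustrative: the tool names and pure-Python pixel operations below are assumptions standing in for the benchmark's 32 OpenCV operations, which are not reproduced here.

```python
from typing import Any, Callable, Dict, List

# Hypothetical tool registry. Each entry is a stand-in for an OpenCV
# operation (e.g. cv2.cvtColor, cv2.threshold); the image is represented
# as a nested list so the sketch has no external dependencies.
TOOLS: Dict[str, Callable[[Any], Any]] = {
    # RGB -> grayscale via a simple channel average
    "grayscale": lambda img: [[sum(px) // 3 for px in row] for row in img],
    # invert 8-bit intensities
    "invert": lambda img: [[255 - px for px in row] for row in img],
    # binarize at a fixed cutoff of 127
    "threshold": lambda img: [[255 if px > 127 else 0 for px in row] for row in img],
}

def run_chain(image: Any, plan: List[str]) -> Any:
    """Execute a tool plan step by step, feeding each output to the next tool,
    as a model-produced execution trajectory would be replayed."""
    for step in plan:
        image = TOOLS[step](image)
    return image

# A 1x2 RGB "image": one dark pixel, one bright pixel.
img = [[(10, 20, 30), (200, 210, 220)]]
out = run_chain(img, ["grayscale", "invert", "threshold"])
print(out)  # [[255, 0]]
```

The point of the chained design is that intermediate outputs constrain later steps: the model must plan the whole sequence, not pick tools in isolation.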

Original Abstract

Recent advancements extend Multimodal Large Language Models (MLLMs) beyond standard visual question answering to utilizing external tools for advanced visual tasks. Despite this progress, precisely executing and effectively composing diverse tools for complex tasks remains a persistent bottleneck. Constrained by sparse tool-sets and simple tool-use trajectories, existing benchmarks fail to capture complex and diverse tool interactions, falling short in evaluating model performance under practical, real-world conditions. To bridge this gap, we introduce VisualToolChain-Bench (VTC-Bench), a comprehensive benchmark designed to evaluate tool-use proficiency in MLLMs. To align with realistic computer vision pipelines, our framework features 32 diverse OpenCV-based visual operations. This rich tool-set enables extensive combinations, allowing VTC-Bench to rigorously assess multi-tool composition and long-horizon, multi-step plan execution. For precise evaluation, we provide 680 curated problems structured across a nine-category cognitive hierarchy, each with ground-truth execution trajectories. Extensive experiments on 19 leading MLLMs reveal critical limitations in current models' visual agentic capabilities. Specifically, models struggle to adapt to diverse tool-sets and generalize to unseen operations, with the leading model Gemini-3.0-Pro only achieving 51% on our benchmark. Furthermore, multi-tool composition remains a persistent challenge. When facing complex tasks, models struggle to formulate efficient execution plans, relying heavily on a narrow, suboptimal subset of familiar functions rather than selecting the optimal tools. By identifying these fundamental challenges, VTC-Bench establishes a rigorous baseline to guide the development of more generalized visual agentic models.

Tags

Multimodal Learning AI Agents Benchmarking

arXiv Categories

cs.AI