Multimodal Learning (Relevance: 9/10)

ViTaB-A: Evaluating Multimodal Large Language Models on Visual Table Attribution

Yahia Alqurnawi, Preetom Biswas, Anmol Rao, Tejas Anvekar, Chitta Baral, Vivek Gupta
arXiv: 2602.15769v1  Published: 2026-02-17  Updated: 2026-02-17

AI Summary

This paper evaluates multimodal large language models on the visual table attribution task and finds that their attribution ability falls far short of their question-answering ability.

Main Contributions

  • Proposes the Visual Table Attribution (ViTaB-A) evaluation task
  • Evaluates the attribution ability of different models across table formats and prompting strategies
  • Finds that current models fall short at fine-grained evidence attribution

Methodology

The authors construct the ViTaB-A dataset, design different table formats and prompting strategies, and evaluate several multimodal large language models on the attribution task.
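As a rough illustration of how such an attribution evaluation might be scored (the paper's exact protocol is not detailed in this summary), the sketch below compares a model's predicted evidence rows and columns against gold annotations; the function name and data format are hypothetical.

```python
# Minimal sketch of attribution scoring: compare a model's cited rows/columns
# against gold evidence. Field names and data format are assumptions, not the
# paper's actual evaluation protocol.

def attribution_accuracy(predictions, gold):
    """Exact-match accuracy over row and column citations, scored separately."""
    row_hits = col_hits = 0
    for pred, ref in zip(predictions, gold):
        row_hits += set(pred["rows"]) == set(ref["rows"])
        col_hits += set(pred["cols"]) == set(ref["cols"])
    n = len(gold)
    return {"row_acc": row_hits / n, "col_acc": col_hits / n}

# Toy example: one question whose answer is supported by row 3 and two columns.
predictions = [{"rows": [3], "cols": ["price"]}]
gold        = [{"rows": [3], "cols": ["price", "item"]}]
print(attribution_accuracy(predictions, gold))  # {'row_acc': 1.0, 'col_acc': 0.0}
```

Scoring rows and columns separately mirrors the paper's observation that models are more reliable at citing rows than columns.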

Original Abstract

Multimodal Large Language Models (mLLMs) are often used to answer questions in structured data such as tables in Markdown, JSON, and images. While these models can often give correct answers, users also need to know where those answers come from. In this work, we study structured data attribution/citation, which is the ability of the models to point to the specific rows and columns that support an answer. We evaluate several mLLMs across different table formats and prompting strategies. Our results show a clear gap between question answering and evidence attribution. Although question answering accuracy remains moderate, attribution accuracy is much lower, near random for JSON inputs, across all models. We also find that models are more reliable at citing rows than columns, and struggle more with textual formats than images. Finally, we observe notable differences across model families. Overall, our findings show that current mLLMs are unreliable at providing fine-grained, trustworthy attribution for structured data, which limits their usage in applications requiring transparency and traceability.

Tags

Multimodal, Large Language Models, Tables, Attribution, Evaluation

arXiv Categories

cs.CL