Multimodal Learning · Relevance: 9/10

ExStrucTiny: A Benchmark for Schema-Variable Structured Information Extraction from Document Images

Mathieu Sibue, Andres Muñoz Garza, Samuel Mensah, Pranav Shetty, Zhiqiang Ma, Xiaomo Liu, Manuela Veloso
arXiv: 2602.12203v1 · Published: 2026-02-12 · Updated: 2026-02-12

AI Summary

Introduces ExStrucTiny, a benchmark dataset for evaluating the ability of generalist vision-language models to perform structured information extraction from document images.

Main Contributions

  • Built the ExStrucTiny benchmark dataset, covering diverse document types and extraction scenarios
  • Proposed a novel data-generation pipeline combining manually created and synthetic samples
  • Analyzed the challenges current vision-language models face in structured information extraction

Methodology

The dataset is built through a combination of manual annotation and synthetic generation; existing VLMs are then evaluated on it, highlighting their difficulties with schema adaptation, query understanding, and answer localization.
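To make the task concrete, here is a minimal, self-contained sketch of what schema-conditioned extraction could look like in practice. The schema format, field names, and validation logic are illustrative assumptions; the paper summary does not specify ExStrucTiny's actual schema representation.

```python
import json

# Hypothetical extraction schema: ExStrucTiny's real schema format is not
# given in this summary, so this JSON-style schema is purely illustrative.
schema = {
    "invoice_number": "string",
    "issue_date": "string (YYYY-MM-DD)",
    "line_items": [{"description": "string", "amount": "number"}],
}

def build_prompt(schema: dict) -> str:
    """Render a target schema into a natural-language instruction for a VLM."""
    return (
        "Extract the following fields from the document image and reply "
        "with JSON matching this schema exactly:\n"
        + json.dumps(schema, indent=2)
    )

def validate_answer(raw_answer: str, schema: dict) -> dict:
    """Parse the model's reply and check every top-level schema key is present."""
    parsed = json.loads(raw_answer)
    missing = [key for key in schema if key not in parsed]
    if missing:
        raise ValueError(f"model omitted required fields: {missing}")
    return parsed

print(build_prompt(schema))

# A well-formed (hypothetical) model reply passes the key check:
reply = '{"invoice_number": "INV-001", "issue_date": "2026-01-15", "line_items": []}'
print(validate_answer(reply, schema))
```

Because the schema varies from query to query rather than coming from a fixed entity ontology, a model must adapt its output structure on the fly; this is the schema-adaptation challenge the benchmark surfaces.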

Original Abstract

Enterprise documents, such as forms and reports, embed critical information for downstream applications like data archiving, automated workflows, and analytics. Although generalist Vision Language Models (VLMs) perform well on established document understanding benchmarks, their ability to conduct holistic, fine-grained structured extraction across diverse document types and flexible schemas is not well studied. Existing Key Entity Extraction (KEE), Relation Extraction (RE), and Visual Question Answering (VQA) datasets are limited by narrow entity ontologies, simple queries, or homogeneous document types, often overlooking the need for adaptable and structured extraction. To address these gaps, we introduce ExStrucTiny, a new benchmark dataset for structured Information Extraction (IE) from document images, unifying aspects of KEE, RE, and VQA. Built through a novel pipeline combining manual and synthetic human-validated samples, ExStrucTiny covers more varied document types and extraction scenarios. We analyze open and closed VLMs on this benchmark, highlighting challenges such as schema adaptation, query under-specification, and answer localization. We hope our work provides a bedrock for improving generalist models for structured IE in documents.

Tags

Structured Information Extraction · Document Image Understanding · Vision-Language Models · Benchmark Dataset

arXiv Categories

cs.CL