Benchmarking Vision-Language Models for French PDF-to-Markdown Conversion
AI Summary
This paper evaluates the performance of VLMs on French PDF-to-Markdown conversion and introduces a new evaluation benchmark.
Key Contributions
- A new benchmark for French PDF-to-Markdown conversion
- A unit-test-style evaluation method targeting concrete failure modes
- An evaluation of 15 VLMs on French documents
Methodology
A French PDF dataset is built via model-disagreement sampling; unit-test-style checks evaluate text presence, reading order, and local table constraints, with category-specific normalization applied to discount presentation-only variance.
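The unit-test-style checks described above can be sketched as follows. This is an illustrative implementation, not the paper's code: the exact normalization rules and matching logic are assumptions, here approximated by Unicode NFKC normalization, whitespace collapsing, and case folding.

```python
import re
import unicodedata

def normalize(text: str) -> str:
    """Hypothetical normalization step: discount presentation-only
    variance (Unicode forms, line breaks, case) before matching."""
    text = unicodedata.normalize("NFKC", text)
    return re.sub(r"\s+", " ", text).strip().lower()

def check_text_presence(markdown: str, snippet: str) -> bool:
    """Unit test: a ground-truth snippet must appear in the output."""
    return normalize(snippet) in normalize(markdown)

def check_reading_order(markdown: str, first: str, second: str) -> bool:
    """Unit test: snippet `first` must occur before snippet `second`."""
    doc = normalize(markdown)
    i, j = doc.find(normalize(first)), doc.find(normalize(second))
    return 0 <= i < j
```

Each page then carries a small suite of such checks, and a model's score is the fraction of checks it passes, so a benign line-break or list-segmentation choice does not count as an error.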
Original Abstract
This report evaluates PDF-to-Markdown conversion using recent Vision-Language Models (VLMs) on challenging French documents. Document parsing is a critical step for Retrieval-Augmented Generation (RAG) pipelines, where transcription and layout errors propagate to downstream retrieval and grounding. Existing benchmarks often emphasize English or Chinese and can over-penalize benign formatting and linearization choices (e.g., line breaks, list segmentation, alternative table renderings) that are largely irrelevant for downstream use. We introduce a French-focused benchmark of difficult pages selected via model-disagreement sampling from a corpus of 60,000 documents, covering handwritten forms, complex layouts, dense tables, and graphics-rich pages. Evaluation is performed with unit-test-style checks that target concrete failure modes (text presence, reading order, and local table constraints) combined with category-specific normalization designed to discount presentation-only variance. Across 15 models, we observe substantially higher robustness for the strongest proprietary models on handwriting and forms, while several open-weights systems remain competitive on standard printed layouts.
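The model-disagreement sampling mentioned in the abstract can be sketched as below. The paper does not specify the disagreement metric; this sketch assumes a simple mean pairwise word-level Jaccard dissimilarity over the transcripts that different models produce for the same page, keeping the pages where models disagree most.

```python
from itertools import combinations

def disagreement_score(transcripts: list[str]) -> float:
    """Mean pairwise dissimilarity (Jaccard on word sets, an assumed
    metric) across model outputs for one page; higher = harder page."""
    def jaccard(a: set[str], b: set[str]) -> float:
        return len(a & b) / len(a | b) if a | b else 1.0
    sets = [set(t.lower().split()) for t in transcripts]
    pairs = list(combinations(sets, 2))
    return sum(1.0 - jaccard(a, b) for a, b in pairs) / len(pairs)

def select_hard_pages(pages: dict[str, list[str]], k: int) -> list[str]:
    """Rank pages by disagreement and keep the k most contested ones."""
    ranked = sorted(pages, key=lambda p: disagreement_score(pages[p]),
                    reverse=True)
    return ranked[:k]
```

In this setup, pages where all models agree (likely easy printed text) score near zero and are discarded, while handwriting, dense tables, and graphics-rich pages tend to surface at the top of the ranking.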