Multimodal Learning · Relevance: 9/10

ICDAR 2025 Competition on End-to-End Document Image Machine Translation Towards Complex Layouts

Yaping Zhang, Yupu Liang, Zhiyang Zhang, Zhiyuan Chen, Lu Xiang, Yang Zhao, Yu Zhou, Chengqing Zong
arXiv: 2603.09392v1 · Published: 2026-03-10 · Updated: 2026-03-10

AI Summary

The ICDAR 2025 DIMT Challenge focuses on end-to-end machine translation of document images with complex layouts.

Key Contributions

  • Organized the DIMT Challenge to advance research on multimodal document understanding
  • Designed two tracks, OCR-free and OCR-based
  • Analyzed the submitted results and identified directions for future research

Methodology

The organizers ran the competition end to end: constructing the dataset, defining the tasks, establishing the evaluation protocol, and analyzing the participants' results.
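The report summary above does not spell out the evaluation metrics. Machine translation quality is conventionally scored with BLEU, so as an illustration (my own sketch, not the challenge's official scoring script), a minimal pure-Python sentence-level BLEU with clipped n-gram precisions and a brevity penalty could look like this:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of all n-grams of length n in tokens."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions
    (n = 1..max_n) times a brevity penalty. Inputs are token lists."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        ref = ngrams(reference, n)
        overlap = sum((cand & ref).values())      # clipped matches
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smooth zero counts
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Penalize candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return bp * math.exp(log_avg)

# Hypothetical example sentences, not taken from the challenge data.
hyp = "the document was translated with its layout preserved".split()
ref = "the document was translated with the layout preserved".split()
score = bleu(hyp, ref)
print(round(score, 3))
```

In practice, document-level benchmarks usually rely on a standard implementation such as sacrebleu rather than hand-rolled scoring; the sketch only shows the shape of the computation.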

Original Abstract

Document Image Machine Translation (DIMT) seeks to translate text embedded in document images from one language to another by jointly modeling both textual content and page layout, bridging optical character recognition (OCR) and natural language processing (NLP). The DIMT 2025 Challenge advances research on end-to-end document image translation, a rapidly evolving area within multimodal document understanding. The competition features two tracks, OCR-free and OCR-based, each with two subtasks for small (less than 1B parameters) and large (greater than 1B parameters) models. Participants submit a single unified DIMT system, with the option to incorporate provided OCR transcripts. Running from December 10, 2024 to April 20, 2025, the competition attracted 69 teams and 27 valid submissions in total. Track 1 had 34 teams and 13 valid submissions, while Track 2 had 35 teams and 14 valid submissions. In this report, we present the challenge motivation, dataset construction, task definitions, evaluation protocol, and a summary of results. Our analysis shows that large-model approaches establish a promising new paradigm for translating complex-layout document images and highlight substantial opportunities for future research.

Tags

Document Image Machine Translation · Multimodal Learning · ICDAR

arXiv Categories

cs.CV cs.AI