Boosting Document Parsing Efficiency and Performance with Coarse-to-Fine Visual Processing
AI Summary
PaddleOCR-VL improves document parsing efficiency and performance through coarse-to-fine visual processing, focusing on key regions while suppressing redundant information.
Key Contributions
- Proposes the Valid Region Focus Module (VRFM), which focuses on the key regions of a document
- Designs and trains PaddleOCR-VL-0.9B, a lightweight vision-language model
- PaddleOCR-VL achieves SOTA performance in both document parsing and element recognition
Methodology
Proposes a coarse-to-fine visual processing architecture: the VRFM first locates key regions, and a lightweight vision-language model then performs detailed recognition on them.
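The two-stage flow described above can be sketched as follows. This is a minimal illustrative sketch, not the actual PaddleOCR-VL API: the function names (`vrfm_locate`, `recognize_region`, `parse_document`), the `Region` type, and the fixed boxes are all hypothetical stand-ins for the coarse detector and the 0.9B recognizer.

```python
from dataclasses import dataclass

@dataclass
class Region:
    x0: int
    y0: int
    x1: int
    y1: int
    label: str  # e.g. "title", "text", "table"

def vrfm_locate(image):
    """Coarse stage: stand-in for the Valid Region Focus Module,
    which predicts the locations of semantically relevant regions.
    A real VRFM runs a lightweight model; here we return fixed
    boxes purely for illustration."""
    return [
        Region(40, 60, 980, 140, "title"),
        Region(40, 180, 980, 900, "text"),
    ]

def recognize_region(image, region):
    """Fine stage: stand-in for PaddleOCR-VL-0.9B, which transcribes
    only the cropped region instead of the entire page."""
    return {
        "bbox": (region.x0, region.y0, region.x1, region.y1),
        "label": region.label,
        "content": f"<recognized {region.label}>",
    }

def parse_document(image):
    # Coarse: find valid regions; fine: recognize each crop,
    # so the recognizer never processes the full high-res page.
    return [recognize_region(image, r) for r in vrfm_locate(image)]
```

The key design point is that the expensive vision-language model only ever sees the crops returned by the coarse stage, which is what keeps the vision-token count low.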
Original Abstract
Document parsing is a fine-grained task where image resolution significantly impacts performance. While advanced research leveraging vision-language models benefits from high-resolution input to boost model performance, this often leads to a quadratic increase in the number of vision tokens and significantly raises computational costs. We attribute this inefficiency to substantial visual-region redundancy in document images, such as the background. To tackle this, we propose PaddleOCR-VL, a novel coarse-to-fine architecture that focuses on semantically relevant regions while suppressing redundant ones, thereby improving both efficiency and performance. Specifically, we introduce a lightweight Valid Region Focus Module (VRFM) which leverages localization and contextual relationship prediction capabilities to identify valid vision tokens. Subsequently, we design and train a compact yet powerful 0.9B vision-language model (PaddleOCR-VL-0.9B) to perform detailed recognition, guided by VRFM outputs to avoid direct processing of the entire large image. Extensive experiments demonstrate that PaddleOCR-VL achieves state-of-the-art performance in both page-level parsing and element-level recognition. It significantly outperforms existing solutions, exhibits strong competitiveness against top-tier VLMs, and delivers fast inference while utilizing substantially fewer vision tokens and parameters, highlighting the effectiveness of targeted coarse-to-fine parsing for accurate and efficient document understanding. The source code and models are publicly available at https://github.com/PaddlePaddle/PaddleOCR.
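The "quadratic increase in the number of vision tokens" mentioned in the abstract follows from how patch-based vision encoders tokenize images: token count scales with image area, so doubling the side length quadruples the tokens. The sketch below illustrates this arithmetic; the patch size of 16 and the 30% valid-region coverage are illustrative assumptions, not figures from the paper.

```python
def vision_tokens(height, width, patch=16):
    """Number of patch tokens a ViT-style encoder produces for an
    image, assuming a (hypothetical) 16x16 patch size."""
    return (height // patch) * (width // patch)

# Doubling resolution quadruples the token count (quadratic in side length).
base = vision_tokens(1024, 1024)     # 64 * 64 = 4096 tokens
doubled = vision_tokens(2048, 2048)  # 128 * 128 = 16384 tokens

# If the VRFM-selected valid regions cover, say, ~30% of the page
# (an assumed figure), only that fraction of the high-resolution
# tokens needs fine-grained processing.
focused = int(doubled * 0.3)
```

This is why suppressing redundant regions such as the background pays off more, not less, as input resolution grows.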