VISTA-Bench: Do Vision-Language Models Really Understand Visualized Text as Well as Pure Text?
AI Summary
VISTA-Bench reveals that existing VLMs suffer a significant performance drop when understanding visualized text, leaving a substantial gap relative to their pure-text understanding ability.
Main Contributions
- Proposes the VISTA-Bench benchmark for evaluating VLMs' understanding of visualized text
- Identifies a pronounced performance gap in VLMs' visualized-text understanding
- Analyzes how the perceptual difficulty of visualized text affects VLM performance
Methodology
Constructs a benchmark spanning multimodal perception, reasoning, and unimodal understanding tasks, then compares VLM performance on pure-text versus visualized-text versions of the same questions under controlled rendering conditions.
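The paired-evaluation idea above can be sketched as a simple metric: score each model on a question twice, once from the pure-text prompt and once from the same content rendered as an image, and report the accuracy drop. This is a minimal illustrative sketch, not the authors' evaluation code; the function names and sample results are hypothetical.

```python
# Sketch of the modality-gap metric implied by the benchmark design:
# the drop in accuracy when identical semantic content moves from
# tokenized text to rendered (visualized) text. Names are illustrative,
# not taken from the VISTA-Bench codebase.

def accuracy(results):
    """Fraction of correct answers; results is a list of booleans."""
    return sum(results) / len(results)

def modality_gap(pure_text_correct, visualized_correct):
    """Accuracy on pure-text queries minus accuracy on the same
    questions rendered as images (positive value = degradation)."""
    return accuracy(pure_text_correct) - accuracy(visualized_correct)

# Hypothetical results for one model on five paired questions.
pure = [True, True, True, False, True]        # 4/5 correct as text
rendered = [True, False, True, False, False]  # 2/5 correct as image
print(modality_gap(pure, rendered))  # prints 0.4 (a 40-point drop)
```

Because the two conditions share identical question semantics and differ only in rendering, any positive gap isolates the model's weakness at reading pixels rather than its underlying reasoning.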
Original Abstract
Vision-Language Models (VLMs) have achieved impressive performance in cross-modal understanding across textual and visual inputs, yet existing benchmarks predominantly focus on pure-text queries. In real-world scenarios, language also frequently appears as visualized text embedded in images, raising the question of whether current VLMs handle such inputs comparably. We introduce VISTA-Bench, a systematic benchmark spanning multimodal perception, reasoning, and unimodal understanding domains. It evaluates visualized text understanding by contrasting pure-text and visualized-text questions under controlled rendering conditions. Extensive evaluation of over 20 representative VLMs reveals a pronounced modality gap: models that perform well on pure-text queries often degrade substantially when equivalent semantic content is presented as visualized text. This gap is further amplified by increased perceptual difficulty, highlighting sensitivity to rendering variations despite unchanged semantics. Overall, VISTA-Bench provides a principled evaluation framework to diagnose this limitation and to guide progress toward more unified language representations across tokenized text and pixels. The source dataset is available at https://github.com/QingAnLiu/VISTA-Bench.