ZeroSense: How Vision Matters in Long Context Compression
AI Summary
The paper proposes a decoupled evaluation framework and the ZeroSense benchmark to assess the quality of visual-text compression (VTC) more accurately.
Key Contributions
- Proposes a decoupled evaluation framework that removes the influence of downstream models' semantic inference
- Constructs the ZeroSense benchmark, which ensures low semantic correlation among test samples
- Reveals a significant divergence between VTC quality and downstream task accuracy
Methodology
By decoupling the MLLM's capabilities from the measurement and designing test samples with low semantic correlation, the paper establishes a new way to evaluate VTC quality: reconstruction fidelity can no longer be inflated by the downstream model's linguistic priors. A minimal sketch of this evaluation loop follows.
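The snippet below is a rough illustration of the idea, not the paper's actual protocol. The names `render`, `compress_and_decode`, and `char_accuracy` are hypothetical placeholders: `render` stands for the text-to-image renderer, `compress_and_decode` for the VTC round trip under test, and position-wise character accuracy stands in for whatever fidelity metric the benchmark actually uses.

```python
import random
import string

def make_low_semantic_sample(length: int = 256) -> str:
    """Generate a random character string with no linguistic structure,
    so a downstream MLLM cannot reconstruct it from language priors."""
    alphabet = string.ascii_letters + string.digits
    return "".join(random.choice(alphabet) for _ in range(length))

def char_accuracy(reference: str, prediction: str) -> float:
    """Position-wise character accuracy between the source text and the
    text recovered from the compressed visual representation."""
    matches = sum(r == p for r, p in zip(reference, prediction))
    return matches / max(len(reference), 1)

def evaluate_vtc(render, compress_and_decode, n_samples: int = 100) -> float:
    """Score a VTC pipeline on low-semantic-correlation samples.

    `render` and `compress_and_decode` are placeholders for the system
    under evaluation (text-to-image rendering, then the VTC round trip).
    """
    scores = []
    for _ in range(n_samples):
        text = make_low_semantic_sample()
        image = render(text)                     # text-to-image rendering
        recovered = compress_and_decode(image)   # VTC round trip
        scores.append(char_accuracy(text, recovered))
    return sum(scores) / len(scores)
```

Because the samples carry no exploitable semantics, any gap between this score and downstream task accuracy isolates how much the downstream model's priors, rather than the compression itself, were doing the work.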
Original Abstract
Recent visual-text compression (VTC) methods, typified by DeepSeek-OCR, report impressively high token compression ratios for long-context modeling tasks by leveraging text-to-image rendering. However, existing evaluation protocols rely heavily on downstream task performance. Such evaluation metrics fail to accurately measure text preservation due to the strong inherent linguistic priors of Multimodal Large Language Models (MLLMs). In this work, we introduce a new evaluation framework that decouples MLLMs' capabilities in order to faithfully assess VTC quality. Within this framework, we further introduce the ZeroSense Benchmark to ensure low semantic correlation among test samples. By eliminating contextual dependencies, our benchmark guarantees that the evaluation results purely reflect VTC quality, unaffected by the semantic inference capabilities of downstream models. Extensive experiments across multiple datasets demonstrate that VTC quality and downstream task accuracy diverge significantly, highlighting the necessity of our decoupled evaluation framework.