EvoTok: A Unified Image Tokenizer via Residual Latent Evolution for Visual Understanding and Generation
AI Summary
EvoTok proposes a unified image tokenizer that bridges the gap between visual understanding and generation through a residual evolution process.
Key Contributions
- Proposes EvoTok, a unified image tokenizer.
- Builds an evolving image representation via residual vector quantization.
- Achieves strong performance on both visual understanding and generation tasks.
Methodology
EvoTok encodes an image into a sequence of residual tokens via residual vector quantization. Successive stages of this sequence capture information at different granularities, supporting both visual understanding and generation.
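The residual cascade described above can be sketched in code. The function below is a minimal illustration of residual vector quantization, not EvoTok's actual implementation: `residual_vq`, the codebook shapes, and the greedy nearest-codeword selection are all assumptions for exposition. Each stage quantizes the residual left by the previous stage, so the summed codewords progressively refine the approximation of the latent.

```python
import numpy as np

def residual_vq(z, codebooks):
    """Quantize z through a cascade of codebooks; stage k encodes
    the residual left over by stages 1..k-1 (illustrative sketch)."""
    residual = z.copy()
    tokens = []
    quantized = np.zeros_like(z)
    for codebook in codebooks:
        # pick the codeword nearest to the current residual
        dists = np.linalg.norm(codebook - residual, axis=1)
        idx = int(np.argmin(dists))
        tokens.append(idx)
        quantized += codebook[idx]   # accumulate the approximation
        residual -= codebook[idx]    # pass the leftover to the next stage
    return tokens, quantized

# Toy setup: 4 stages, 16 codewords per codebook, latent dimension 8.
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(16, 8)) for _ in range(4)]
z = rng.normal(size=8)
tokens, z_hat = residual_vq(z, codebooks)
```

In this framing, the token index sequence (one token per stage) is the evolution trajectory: early tokens carry the coarse bulk of the signal and later tokens refine what remains, which is the property EvoTok repurposes so that shallow stages serve pixel reconstruction and deep stages serve semantics.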
Original Abstract
The development of unified multimodal large language models (MLLMs) is fundamentally challenged by the granularity gap between visual understanding and generation: understanding requires high-level semantic abstractions, while image generation demands fine-grained pixel-level representations. Existing approaches usually either enforce both forms of supervision on the same set of representations or decouple them into separate feature spaces, leading to interference and inconsistency, respectively. In this work, we propose EvoTok, a unified image tokenizer that reconciles these requirements through a residual evolution process within a shared latent space. Instead of maintaining separate token spaces for pixels and semantics, EvoTok encodes an image into a cascaded sequence of residual tokens via residual vector quantization. This residual sequence forms an evolution trajectory in which earlier stages capture low-level details and deeper stages progressively transition toward high-level semantic representations. Despite being trained on a relatively modest dataset of 13M images, far smaller than the billion-scale datasets used by many previous unified tokenizers, EvoTok achieves a strong reconstruction quality of 0.43 rFID on ImageNet-1K at 256x256 resolution. When integrated with a large language model, EvoTok shows promising performance across 7 out of 9 visual understanding benchmarks, and remarkable results on image generation benchmarks such as GenEval and GenAI-Bench. These results demonstrate that modeling visual representations as an evolving trajectory provides an effective and principled solution for unifying visual understanding and generation.