Cubic Discrete Diffusion: Discrete Visual Generation on High-Dimensional Representation Tokens
AI Summary
Proposes CubiD, the first model to achieve visual generation on high-dimensional discrete representations, and validates their representational capability.
Main Contributions
- Proposes the CubiD model, enabling generation on high-dimensional discrete representations.
- Introduces a fine-grained masking strategy that improves the model's ability to learn.
- Validates that the same discrete tokens can serve both understanding and generation tasks.
Methodology
CubiD runs a diffusion process over discrete high-dimensional representations using a fine-grained masking-and-prediction strategy, thereby enabling visual generation.
Original Abstract
Visual generation with discrete tokens has gained significant attention as it enables a unified token prediction paradigm shared with language models, promising seamless multimodal architectures. However, current discrete generation methods remain limited to low-dimensional latent tokens (typically 8-32 dims), sacrificing the semantic richness essential for understanding. While high-dimensional pretrained representations (768-1024 dims) could bridge this gap, their discrete generation poses fundamental challenges. In this paper, we present Cubic Discrete Diffusion (CubiD), the first discrete generation model for high-dimensional representations. CubiD performs fine-grained masking throughout the high-dimensional discrete representation -- any dimension at any position can be masked and predicted from partial observations. This enables the model to learn rich correlations both within and across spatial positions, with the number of generation steps fixed at $T$ regardless of feature dimensionality, where $T \ll hwd$. On ImageNet-256, CubiD achieves state-of-the-art discrete generation with strong scaling behavior from 900M to 3.7B parameters. Crucially, we validate that these discretized tokens preserve original representation capabilities, demonstrating that the same discrete tokens can effectively serve both understanding and generation tasks. We hope this work will inspire future research toward unified multimodal architectures. Code is available at: https://github.com/YuqingWang1029/CubiD.
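To make the fine-grained masking idea concrete, here is a minimal, hypothetical sketch (not the authors' code): each scalar entry of an (h, w, d) grid of discrete tokens is masked independently, so any dimension at any spatial position can be hidden and later predicted from the visible entries. The function name, sentinel id, and grid sizes are illustrative assumptions.

```python
import numpy as np

MASK = -1  # hypothetical sentinel id for a masked entry

def fine_grained_mask(tokens: np.ndarray, mask_ratio: float, rng=None):
    """Mask a random subset of the h*w*d scalar entries independently.

    Unlike position-level masking (which hides all d dimensions of a
    spatial location at once), this per-entry mask lets the model learn
    correlations both within and across spatial positions.
    """
    rng = rng or np.random.default_rng(0)
    mask = rng.random(tokens.shape) < mask_ratio  # per-entry Bernoulli mask
    masked = tokens.copy()
    masked[mask] = MASK
    return masked, mask

# Illustrative high-dimensional token grid: 16x16 positions, 768 dims each.
h, w, d = 16, 16, 768
tokens = np.random.default_rng(1).integers(0, 4096, size=(h, w, d))
masked, mask = fine_grained_mask(tokens, mask_ratio=0.5)
# In the paper's setup, the model would then predict the masked entries
# from the visible ones over a fixed number of steps T, with T << h*w*d.
```

Note that the number of generation steps stays fixed at $T$ regardless of how many of the $hwd$ entries are masked, which is what makes generation over 768-1024-dimensional representations tractable.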