Multimodal Learning Relevance: 9/10

UniCom: Unified Multimodal Modeling via Compressed Continuous Semantic Representations

Yaqi Zhao, Wang Lin, Zijian Zhang, Miles Yang, Jingyuan Chen, Wentao Zhang, Zhao Zhong, Liefeng Bo
arXiv: 2603.10702v1 Published: 2026-03-11 Updated: 2026-03-11

AI Summary

UniCom unifies multimodal understanding and generation through compressed continuous semantic representations, delivering exceptional controllability in image editing.

Key Contributions

  • Proposes UniCom, a unified multimodal framework built on compressed continuous representations
  • Demonstrates that reducing the channel dimension is more effective than spatial downsampling
  • Validates the advantages of the Transfusion architecture in convergence and consistency
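The channel-versus-spatial tradeoff in the second contribution can be made concrete with a quick token-budget calculation. The grid size and channel counts below are illustrative assumptions only; the summary does not state the paper's actual dimensions:

```python
# Hypothetical dense feature map from a ViT-style encoder:
# a 27x27 token grid with 1152 channels (illustrative numbers only).
h, w, c = 27, 27, 1152
dense = h * w * c                  # 839,808 values in total

# Option A: 3x spatial downsampling in each axis, channels kept.
spatial = (h // 3) * (w // 3) * c  # 9 * 9 * 1152 = 93,312 values

# Option B: channel reduction to 128 dims, spatial grid kept.
channel = h * w * 128              # 27 * 27 * 128 = 93,312 values

# Both options land on the same compressed budget; UniCom's finding is
# that spending it on the full-resolution grid with fewer channels
# (Option B) serves reconstruction and generation better than a
# coarser grid with full channels (Option A).
print(dense, spatial, channel)
```

The point of the sketch is that the comparison in the paper is between two ways of allocating the same compression budget, not between different budgets.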

Methodology

Designs an attention-based semantic compressor that distills dense features into a compact unified representation, and adopts the Transfusion architecture.
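A minimal sketch of what such an attention-based compressor could look like, in pure Python for readability. The query count, dimensions, and exact pooling scheme are assumptions, not the paper's specification: a small set of learned queries cross-attends over all dense feature tokens, producing a fixed number of compressed tokens.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def matmul(a, b):
    """Multiply a (p x q) by b (q x r) as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(row) for row in zip(*m)]

def attention_compress(features, queries):
    """Cross-attention pooling: each learned query attends over all
    dense feature tokens, yielding len(queries) compressed tokens
    with the same channel dimension as the input."""
    d = len(features[0])
    scale = 1.0 / math.sqrt(d)
    scores = matmul(queries, transpose(features))            # m x n
    weights = [softmax([s * scale for s in row]) for row in scores]
    return matmul(weights, features)                         # m x d

# Toy dense features: 16 tokens of dimension 4 (e.g. a flat 4x4 grid).
features = [[(i * 7 + j) % 5 / 5.0 for j in range(4)] for i in range(16)]
# Two "learned" queries (fixed here for the demo) -> 2 compressed tokens.
queries = [[0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1]]
compressed = attention_compress(features, queries)
```

Because each output token is a convex combination of the input tokens, the compressed representation stays in the continuous feature space rather than snapping to a discrete codebook, which matches the framework's stated motivation.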

Original Abstract

Current unified multimodal models typically rely on discrete visual tokenizers to bridge the modality gap. However, discretization inevitably discards fine-grained semantic information, leading to suboptimal performance in visual understanding tasks. Conversely, directly modeling continuous semantic representations (e.g., CLIP, SigLIP) poses significant challenges in high-dimensional generative modeling, resulting in slow convergence and training instability. To resolve this dilemma, we introduce UniCom, a unified framework that harmonizes multimodal understanding and generation via compressed continuous representation. We empirically demonstrate that reducing channel dimension is significantly more effective than spatial downsampling for both reconstruction and generation. Accordingly, we design an attention-based semantic compressor to distill dense features into a compact unified representation. Furthermore, we validate that the transfusion architecture surpasses query-based designs in convergence and consistency. Experiments demonstrate that UniCom achieves state-of-the-art generation performance among unified models. Notably, by preserving rich semantic priors, it delivers exceptional controllability in image editing and maintains image consistency even without relying on VAE.

Tags

Multimodal Learning · Image Generation · Image Editing · Semantic Compression

arXiv Categories

cs.CV