Multimodal Learning — Relevance: 9/10

HYDRA: Unifying Multi-modal Generation and Understanding via Representation-Harmonized Tokenization

Xuerui Qiu, Yutao Cui, Guozhen Zhang, Junzhe Li, JiaKui Hu, Xiao Zhang, Yang Li, Songtao Liu, Miles Yang, Yu Shi, Zhao Zhong, Liefeng Bo
arXiv: 2603.15228v1 · Published: 2026-03-16 · Updated: 2026-03-16

AI Summary

HYDRA unifies multimodal generation and understanding through Representation-Harmonized Tokenization, setting a new state of the art.

Key Contributions

  • Proposes HYDRA-TOK, a representation-harmonized ViT
  • Introduces the Generation-Semantic Bottleneck (GSB) mechanism
  • Builds HYDRA, a native unified framework integrating perception and generation
  • Achieves SOTA on visual reconstruction and multiple generation and understanding tasks

Methodology

HYDRA evolves the ViT into a progressive learner that transitions from a Gen-ViT to a Sem-ViT, with the GSB bridging generative and semantic features.
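The compress-then-restore behavior of the bottleneck can be sketched in a few lines of numpy. This is a minimal illustrative toy, not the paper's implementation: the feature width (768), bottleneck width (16), random projections, and ReLU nonlinearity are all assumptions chosen only to show the shape flow from high-dimensional patch features to a compact generative code and back to a semantic-width representation.

```python
import numpy as np

# Hypothetical sketch of a Generation-Semantic Bottleneck (GSB):
# compress patch features to a low-dimensional space (the compact
# code used for generation), then restore dimensionality for the
# semantic branch. All sizes below are illustrative assumptions.
rng = np.random.default_rng(0)

d_model, d_bottleneck = 768, 16  # assumed ViT width and bottleneck width

W_down = rng.standard_normal((d_model, d_bottleneck)) / np.sqrt(d_model)
W_up = rng.standard_normal((d_bottleneck, d_model)) / np.sqrt(d_bottleneck)

def gsb(features: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Compress then re-expand patch features.

    Returns (code, restored): the low-dimensional code that a
    generative decoder would consume, and the restored features
    that a semantic encoder would continue to process.
    """
    code = features @ W_down                     # (N, d_bottleneck)
    restored = np.maximum(code @ W_up, 0.0)      # (N, d_model), ReLU
    return code, restored

tokens = rng.standard_normal((196, d_model))     # e.g. a 14x14 patch grid
code, restored = gsb(tokens)
print(code.shape, restored.shape)                # (196, 16) (196, 768)
```

The point of the sketch is the asymmetry the abstract describes: the narrow code discards fine noise for robust synthesis, while the re-expanded representation gives the semantic stage enough width for comprehension.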

Original Abstract

Unified Multimodal Models struggle to bridge the fundamental gap between the abstract representations needed for visual understanding and the detailed primitives required for generation. Existing approaches typically compromise by employing decoupled encoders, stacking a representation encoder atop VAEs, or utilizing discrete quantization. However, these methods often disrupt information coherence and lead to optimization conflicts. To this end, we introduce HYDRA-TOK, a representation-harmonized pure ViT built on the insight that visual modeling should evolve from generation to understanding. HYDRA-TOK reformulates the standard backbone into a progressive learner that transitions from a Gen-ViT, which captures structure-preserving primitives, to a Sem-ViT for semantic encoding. Crucially, this transition is mediated by a Generation-Semantic Bottleneck (GSB), which compresses features into a low-dimensional space to filter noise for robust synthesis, then restores dimensionality to empower complex semantic comprehension. Built upon this foundation, we present HYDRA, a native unified framework integrating perception and generation within a single parameter space. Extensive experiments establish HYDRA as a new state-of-the-art. It sets a benchmark in visual reconstruction (rFID 0.08) and achieves top-tier generation performance on GenEval (0.86), DPG-Bench (86.4), and WISE (0.53), while simultaneously outperforming previous native UMMs by an average of 10.0 points across eight challenging understanding benchmarks.

Tags

Multimodal Learning · Visual Understanding · Visual Generation · ViT · Unified Models

arXiv Category

cs.CV