InternVL-U: Democratizing Unified Multimodal Models for Understanding, Reasoning, Generation and Editing
AI Summary
InternVL-U is a lightweight unified multimodal model that delivers efficient understanding, reasoning, generation, and editing with only 4B parameters.
Key Contributions
- Proposes InternVL-U (4B), a lightweight unified multimodal model
- Adopts unified contextual modeling with a modality-specific modular design
- Builds a comprehensive data synthesis pipeline targeting high-semantic-density tasks
Methodology
InternVL-U integrates an MLLM with an MMDiT-based visual generation head, and uses Chain-of-Thought (CoT) reasoning to align user intent with fine-grained visual generation details.
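The decoupled design described above, a shared MLLM context feeding a separate MMDiT-based generation head with a CoT plan in between, can be sketched in plain Python. This is an illustrative sketch only: all class and method names below (UnderstandingMLLM, MMDiTHead, UnifiedModel, etc.) are hypothetical placeholders, not the paper's actual API.

```python
# Hypothetical sketch of the unified-context / modality-specific-head idea.
# None of these names come from the InternVL-U codebase.

class UnderstandingMLLM:
    """Stands in for the multimodal LLM handling understanding and reasoning."""
    def encode(self, text, images=None):
        # Real model: tokenize text, encode images, run the LLM; here we just
        # return stand-in strings for the shared context and the CoT plan.
        return {"context": f"ctx({text})", "cot": f"plan for: {text}"}

class MMDiTHead:
    """Stands in for the MMDiT-based visual generation head."""
    def generate(self, context, cot):
        # Real head: a diffusion transformer conditioned on the LLM's
        # contextual states plus the CoT plan (decoupled visual path).
        return f"image conditioned on [{context}] via [{cot}]"

class UnifiedModel:
    """Unified contextual modeling with modality-specific modules:
    one shared MLLM context; separate output paths for text and image."""
    def __init__(self):
        self.mllm = UnderstandingMLLM()
        self.gen_head = MMDiTHead()

    def understand(self, text, images=None):
        # Understanding/reasoning stays entirely inside the MLLM.
        return self.mllm.encode(text, images)["context"]

    def generate_image(self, prompt):
        # Generation reuses the same MLLM context, then hands the
        # CoT-refined intent to the dedicated generation head.
        state = self.mllm.encode(prompt)
        return self.gen_head.generate(state["context"], state["cot"])

model = UnifiedModel()
print(model.generate_image("a cat reading a book"))
```

The point of the sketch is the routing, not the internals: both tasks share one encoding path, and only image outputs pass through the extra generation module, which is what lets the 4B MLLM keep its comprehension ability untouched.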
Original Abstract
Unified multimodal models (UMMs) that integrate understanding, reasoning, generation, and editing face inherent trade-offs between maintaining strong semantic comprehension and acquiring powerful generation capabilities. In this report, we present InternVL-U, a lightweight 4B-parameter UMM that democratizes these capabilities within a unified framework. Guided by the principles of unified contextual modeling and modality-specific modular design with decoupled visual representations, InternVL-U integrates a state-of-the-art Multimodal Large Language Model (MLLM) with a specialized MMDiT-based visual generation head. To further bridge the gap between aesthetic generation and high-level intelligence, we construct a comprehensive data synthesis pipeline targeting high-semantic-density tasks, such as text rendering and scientific reasoning, under a reasoning-centric paradigm that leverages Chain-of-Thought (CoT) to better align abstract user intent with fine-grained visual generation details. Extensive experiments demonstrate that InternVL-U achieves a superior performance-efficiency balance. Despite using only 4B parameters, it consistently outperforms unified baseline models over 3x larger in scale, such as BAGEL (14B), on various generation and editing tasks, while retaining strong multimodal understanding and reasoning capabilities.