Multimodal Learning (relevance: 9/10)

Flash-Unified: A Training-Free and Task-Aware Acceleration Framework for Native Unified Models

Junlong Ke, Zichen Wen, Boxue Yang, Yantai Yang, Xuyang Liu, Chenfei Liao, Zhaorun Chen, Shaobo Wang, Linfeng Zhang
arXiv: 2603.15271v1 Published: 2026-03-16 Updated: 2026-03-16

AI Summary

The FlashU framework applies task-aware optimization to accelerate both generation and understanding tasks in unified models while preserving performance.

Key Contributions

  • First systematic analysis of parameter specialization in unified models
  • FlashU, a training-free and task-aware acceleration framework
  • Validation of FlashU's speedup and preserved performance on the Show-o2 model

Methodology

FlashU eliminates redundancy and accelerates inference through Task-Specific Network Pruning, Dynamic Layer Skipping, a Diffusion Head Cache, and related techniques.
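The Diffusion Head Cache named above is described in the abstract as a "temporal approximation for the diffusion head": since adjacent denoising steps change slowly, the head's output can be recomputed only periodically and reused in between. A toy sketch of that caching pattern follows; the function names, the fixed refresh interval, and the per-step interface are all assumptions, not the paper's implementation.

```python
def run_diffusion_with_head_cache(head_fn, hidden_states_per_step, refresh_every=2):
    """Temporal-approximation cache for a diffusion head (hypothetical sketch).

    Recompute `head_fn` only every `refresh_every` denoising steps and
    reuse the most recent output on the skipped steps.
    """
    outputs, cached = [], None
    for t, hidden in enumerate(hidden_states_per_step):
        if cached is None or t % refresh_every == 0:
            cached = head_fn(hidden)   # full diffusion-head computation
        outputs.append(cached)         # reused (approximate) on skipped steps
    return outputs
```

With `refresh_every=2` this halves the number of head evaluations at the cost of a one-step staleness on alternating timesteps; a real system would likely adapt the interval to how fast the hidden states drift.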

Original Abstract

Native unified multimodal models, which integrate both generative and understanding capabilities, face substantial computational overhead that hinders their real-world deployment. Existing acceleration techniques typically employ a static, monolithic strategy, ignoring the fundamental divergence in computational profiles between iterative generation tasks (e.g., image generation) and single-pass understanding tasks (e.g., VQA). In this work, we present the first systematic analysis of unified models, revealing pronounced parameter specialization, where distinct neuron sets are critical for each task. This implies that, at the parameter level, unified models have implicitly internalized separate inference pathways for generation and understanding within a single architecture. Based on these insights, we introduce a training-free and task-aware acceleration framework, FlashU, that tailors optimization to each task's demands. Across both tasks, we introduce Task-Specific Network Pruning and Dynamic Layer Skipping, aiming to eliminate inter-layer and task-specific redundancy. For visual generation, we implement a time-varying control signal for the guidance scale and a temporal approximation for the diffusion head via Diffusion Head Cache. For multimodal understanding, building upon the pruned model, we introduce Dynamic Token Pruning via a V-Norm Proxy to exploit the spatial redundancy of visual inputs. Extensive experiments on Show-o2 demonstrate that FlashU achieves 1.78$\times$ to 2.01$\times$ inference acceleration across both understanding and generation tasks while maintaining SOTA performance, outperforming competing unified models and validating our task-aware acceleration paradigm. Our code is publicly available at https://github.com/Rirayh/FlashU.
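The abstract's "Dynamic Token Pruning via a V-Norm Proxy" suggests scoring visual tokens cheaply and dropping the low-scoring ones to exploit spatial redundancy. A minimal sketch of one plausible reading, scoring each token by the L2 norm of its value projection, is below; the function name, the norm-based scoring rule, and the keep-ratio interface are assumptions for illustration, not the authors' method.

```python
import numpy as np

def vnorm_token_pruning(value_states, keep_ratio=0.5):
    """Hypothetical V-norm proxy: keep the visual tokens whose value
    vectors have the largest L2 norms.

    value_states: (num_tokens, head_dim) value projections for one head.
    Returns the kept token indices in their original order.
    """
    scores = np.linalg.norm(value_states, axis=-1)   # one score per token
    num_keep = max(1, int(len(scores) * keep_ratio))
    kept = np.argsort(scores)[-num_keep:]            # highest-norm tokens
    return np.sort(kept)                             # preserve token order
```

The appeal of a norm proxy is that it needs no extra attention pass: the value vectors are already computed, so the pruning decision adds only a norm and a top-k selection.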

Tags

Unified Models · Model Acceleration · Multimodal Learning

arXiv Categories

cs.CV