Multimodal Learning Relevance: 9/10

Best of Both Worlds: Multimodal Reasoning and Generation via Unified Discrete Flow Matching

Onkar Susladkar, Tushar Prakash, Gayatri Deshmukh, Kiet A. Nguyen, Jiaxun Zhang, Adheesh Juvekar, Tianshu Bao, Lin Chai, Sparsh Mittal, Inderjit S Dhillon, Ismini Lourentzou
arXiv: 2602.12221v1 Published: 2026-02-12 Updated: 2026-02-12

AI Summary

UniDFlow decouples understanding from generation and optimizes multimodal preference alignment, achieving SOTA performance on multimodal tasks.

Key Contributions

  • Proposes UniDFlow, a unified discrete flow-matching framework
  • Decouples understanding and generation via task-specific low-rank adapters
  • Introduces a reference-based multimodal preference alignment method

Methodology

Understanding and generation are decoupled through task-specific low-rank adapters, and outputs are further optimized with reference-based preference alignment.
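The paper excerpt gives no reference code, so the following is a minimal PyTorch sketch of the decoupling idea, assuming standard LoRA-style adapters routed per task over a frozen shared layer. All names here (LoRALinear, TaskRoutedLayer, rank, alpha) are illustrative, not from the paper.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update.

    Hypothetical sketch: UniDFlow's actual adapter placement, rank,
    and scaling are not specified in the excerpt above.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # shared backbone stays frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


class TaskRoutedLayer(nn.Module):
    """Routes each forward pass through the adapter of the active task,
    so understanding and generation never share trainable weights."""

    def __init__(self, base: nn.Linear, tasks=("understanding", "generation")):
        super().__init__()
        self.adapters = nn.ModuleDict({t: LoRALinear(base) for t in tasks})

    def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
        return self.adapters[task](x)


# Example: one shared projection, two isolated trainable adapters.
layer = TaskRoutedLayer(nn.Linear(512, 512))
h = torch.randn(2, 512)
out_understanding = layer(h, task="understanding")
out_generation = layer(h, task="generation")
```

Keeping the backbone frozen and giving each task its own low-rank update is one plausible way to realize the "avoiding objective interference and representation entanglement" claim from the abstract, since gradients for one task never touch the other task's parameters.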

Original Abstract

We propose UniDFlow, a unified discrete flow-matching framework for multimodal understanding, generation, and editing. It decouples understanding and generation via task-specific low-rank adapters, avoiding objective interference and representation entanglement, while a novel reference-based multimodal preference alignment optimizes relative outcomes under identical conditioning, improving faithfulness and controllability without large-scale retraining. UniDFlow achieves SOTA performance across eight benchmarks and exhibits strong zero-shot generalization to tasks including inpainting, in-context image generation, reference-based editing, and compositional generation, despite no explicit task-specific training.
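The abstract describes optimizing "relative outcomes under identical conditioning" against a reference. One hedged reading is a DPO-style pairwise objective in which a frozen reference model anchors the update; the sketch below assumes that formulation, and the function and parameter names (reference_based_preference_loss, beta) are illustrative, not UniDFlow's actual objective.

```python
import torch
import torch.nn.functional as F


def reference_based_preference_loss(
    logp_chosen: torch.Tensor,      # log p_theta(y_w | c), preferred output
    logp_rejected: torch.Tensor,    # log p_theta(y_l | c), dispreferred output
    ref_logp_chosen: torch.Tensor,  # log p_ref(y_w | c), frozen reference
    ref_logp_rejected: torch.Tensor,
    beta: float = 0.1,
) -> torch.Tensor:
    """DPO-style loss over a preference pair that shares the same
    conditioning c. Hypothetical sketch: the paper's exact alignment
    objective is not given in the excerpt above."""
    chosen_margin = logp_chosen - ref_logp_chosen
    rejected_margin = logp_rejected - ref_logp_rejected
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```

Because both outputs are scored under identical conditioning and measured relative to the frozen reference, only the policy's preference margin is trained, which matches the abstract's claim of improving faithfulness and controllability without large-scale retraining.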

Tags

Multimodal Learning · Discrete Flow Matching · Image Generation · Image Editing

arXiv Categories

cs.CV