LLM Reasoning relevance: 8/10

Training-Free Dynamic Upcycling of Expert Language Models

Eros Fanì, Oğuzhan Ersoy
arXiv: 2603.29765v1 发布: 2026-03-31 更新: 2026-03-31

AI Summary

DUME builds a multitask MoE model by dynamically combining domain-expert models, improving performance without any additional training.

Key Contributions

  • Proposes Dynamic Upcycling MoE (DUME), a novel method
  • Builds a multitask model without any additional training
  • Outperforms baseline methods in both causal language modeling and reasoning settings

Methodology

Using the closed-form solution of ridge regression, DUME dynamically adds domain-expert models to construct an MoE structure, with no further optimization required.
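The closed-form solution is what makes the approach training-free: ridge regression can be solved exactly in one linear-algebra step, so no gradient descent is needed. A minimal sketch of that idea, assuming (hypothetically) that a linear router is fit to map hidden states to per-expert affinity scores — the variable names, dimensions, and routing setup here are illustrative, not the paper's actual implementation:

```python
import numpy as np

def ridge_closed_form(X: np.ndarray, Y: np.ndarray, lam: float = 1e-2) -> np.ndarray:
    """Closed-form ridge regression: W = (X^T X + lam*I)^{-1} X^T Y.

    Solving this linear system directly avoids any iterative
    optimization, which is the property that enables adding
    experts without further training.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Hypothetical usage: fit router weights from token hidden states (X)
# to affinity targets for 4 experts (Y), in a single closed-form step.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 16))   # 256 hidden states, dimension 16
Y = rng.standard_normal((256, 4))    # affinity targets for 4 experts
W = ridge_closed_form(X, Y)          # shape (16, 4)
scores = X @ W                       # router logits, shape (256, 4)
```

Because adding a new expert only appends a column to Y (and hence to W), the router can be extended dynamically without refitting from scratch — consistent with the scalability claim above.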

Original Abstract

Large Language Models (LLMs) have achieved remarkable performance on a wide range of specialized tasks, exhibiting strong problem-solving capabilities. However, training these models is prohibitively expensive, and they often lack domain-specific expertise because they rely on general knowledge datasets. Expertise finetuning can address this issue; however, it often leads to overspecialization, and developing a single multi-domain expert remains difficult due to diverging objectives. Furthermore, multitask training is challenging due to interference and catastrophic forgetting. Existing work proposes combining the expertise of dense models within a Mixture of Experts (MoE) architecture, although this approach still requires multitask finetuning. To address these issues, we introduce Dynamic Upcycling MoE (DUME), a novel approach that reuses dense experts trained on different domains to construct a unified MoE model. Our method builds a single multitask model that preserves the capabilities of the original dense experts without requiring additional training. DUME is both cost-efficient and scalable: by leveraging the closed-form solution of ridge regression, it eliminates the need for further optimization and enables experts to be added dynamically while maintaining the model's original performance. We demonstrate that DUME consistently outperforms baseline approaches in both causal language modeling and reasoning settings. Finally, we also show that the DUME model can be fine-tuned to further improve performance. We show that, in the causal language modeling setting, DUME can retain up to 97.6% of a dense expert model specialized in one particular domain, and that it can also surpass it in the reasoning setting, where it can achieve 102.1% of the dense expert performance. Our code is available at: github.com/gensyn-ai/dume.

Tags

MoE Transfer Learning Fine-tuning Multitask Learning

arXiv Categories

cs.LG cs.CL