LLM Memory & RAG relevance: 7/10

Cost-Penalized Fitness in FMA-Orchestrated Mixture of Experts: Experimental Evidence for Molecular Memory in Domain Adaptation

Martin Jaraiz
arXiv: 2604.00812v1 Published: 2026-04-01 Updated: 2026-04-01

AI Summary

Proposes an MoE management method based on cost-penalized fitness that produces a "molecular memory" effect in LLM domain adaptation.

Key Contributions

  • Proposes an MoE expert-management method based on cost-penalized fitness
  • Identifies a "molecular memory" effect that speeds up recovery when returning to a previously learned domain
  • Provides a preliminary cost analysis estimating potential economic and energy savings

Methodology

Using nanoFMT, an FMA-orchestrated Transformer, experts are managed with a cost-penalized fitness metric and a linear grace period for newborn experts, and domain-shift experiments are run to evaluate the resulting behavior. A minimal sketch of such a fitness rule follows.
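The paper's exact fitness formula is not reproduced in this summary, so the Python sketch below only illustrates the general idea of a cost-penalized fitness whose penalty is phased in over a linear grace period. The function name and the `grace_steps` and `cost_weight` parameters are assumptions for illustration, not the paper's definitions.

```python
def cost_penalized_fitness(raw_fitness: float,
                           cost: float,
                           age: int,
                           grace_steps: int = 1000,
                           cost_weight: float = 1.0) -> float:
    """Hypothetical fitness of one expert: utility minus a cost penalty
    that is phased in linearly while the expert is still 'newborn'."""
    # Linear grace period: the penalty ramps from 0 at birth to its full
    # weight once the expert has survived `grace_steps` updates, so a new
    # expert is not evicted before it has had a chance to specialize.
    ramp = min(age / grace_steps, 1.0)
    return raw_fitness - cost_weight * ramp * cost
```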

Original Abstract

We present experimental results from seven controlled runs of nanoFMT, a Free-Market Algorithm (FMA) orchestrated transformer with dynamic Mixture-of-Experts (MoE) management. The experiments address a fundamental question for advanced LLM development: how should an MoE system manage its expert pool when operating at full capacity under changing data distributions? We demonstrate that cost-penalized fitness metrics, combined with a linear grace period for newborn experts, produce a system that accumulates domain expertise through diversification rather than replacement. The central result is a round-trip domain shift experiment showing 9-11x faster recovery when returning to a previously learned domain, with zero expert births or replacements required. This "molecular memory" effect -- where dormant experts survive and reactivate when their domain returns -- has no analogue in current MoE management approaches. A preliminary cost analysis estimates annual savings of $39.1M and 27.1 GWh energy reduction for an OpenAI-scale provider under a moderate scenario.
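To make the "molecular memory" claim concrete, here is a hypothetical Python sketch of an expert pool in which low-fitness experts go dormant instead of being replaced, and can be revived without any new births when their domain reappears. The `Expert` and `ExpertPool` classes, the `domain_signature` matching, and `active_k` are illustrative assumptions, not nanoFMT's actual data structures.

```python
from dataclasses import dataclass, field


@dataclass
class Expert:
    domain_signature: str      # assumed fingerprint of the expert's niche
    fitness: float = 0.0
    dormant: bool = False


@dataclass
class ExpertPool:
    experts: list[Expert] = field(default_factory=list)
    active_k: int = 8          # number of experts kept active at once

    def reassign(self) -> None:
        """Keep the top-k experts by fitness active; the rest go dormant
        instead of being deleted, preserving previously learned domains."""
        ranked = sorted(self.experts, key=lambda e: e.fitness, reverse=True)
        for i, expert in enumerate(ranked):
            expert.dormant = i >= self.active_k

    def on_domain_return(self, domain_signature: str) -> None:
        """If a dormant expert already matches the returning domain, revive
        it: the fast, birth-free recovery path described in the abstract."""
        for expert in self.experts:
            if expert.dormant and expert.domain_signature == domain_signature:
                expert.dormant = False
```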

Tags

MoE, Domain Adaptation, Transformer, Molecular Memory, Free-Market Algorithm

arXiv Category

cs.LG