Agent Tuning & Optimization Relevance: 7/10

Optimizing Multilingual LLMs via Federated Learning: A Study of Client Language Composition

Aleix Sant, Jordi Luque, Carlos Escolano
arXiv: 2603.24242v1 Published: 2026-03-25 Updated: 2026-03-25

AI Summary

Studies how client language composition in federated learning affects the performance, fairness, and efficiency of multilingual LLMs.

Key Contributions

  • Extended the FederatedScope-LLM framework to support multilingual instruction tuning
  • Proposed LDES-FL, a client-specific dynamic early stopping mechanism
  • Analyzed how client language composition affects multilingual LLMs
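The core idea of LDES-FL, letting each client pause and later resume local training based on its own validation performance, can be sketched as a small state machine. This is a minimal illustrative sketch, not the paper's implementation; the class name, `patience`, and `min_delta` parameters are assumptions.

```python
class LocalDynamicEarlyStopping:
    """Illustrative sketch of client-side dynamic early stopping:
    pause local training when client-side validation stalls, and
    resume when a newly received global model improves validation
    again (names and thresholds are hypothetical, not from LDES-FL)."""

    def __init__(self, patience: int = 3, min_delta: float = 1e-3):
        self.patience = patience      # non-improving rounds tolerated
        self.min_delta = min_delta    # minimum improvement to count
        self.best_loss = float("inf")
        self.bad_rounds = 0
        self.paused = False

    def update(self, val_loss: float) -> bool:
        """Called each round with the client's validation loss.
        Returns True if the client should run local training this round."""
        if val_loss < self.best_loss - self.min_delta:
            # Improvement: record it and (re)activate local training.
            self.best_loss = val_loss
            self.bad_rounds = 0
            self.paused = False
        else:
            # Stall: after `patience` flat rounds, pause local updates.
            self.bad_rounds += 1
            if self.bad_rounds >= self.patience:
                self.paused = True
        return not self.paused
```

A paused client still receives global models; if aggregation of other clients' updates later lowers its validation loss, `update` flips back to `True` and local training resumes, which is what makes the mechanism dynamic rather than a one-shot stop.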

Methodology

Through a series of experiments that vary client language composition, the study examines its effect on the quality, fairness, and training cost of multilingual LLMs.
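The experimental variable, sweeping clients from fully monolingual to increasingly multilingual, can be sketched as a simple round-robin partitioner. This is a hypothetical helper for illustration, not part of FederatedScope-LLM.

```python
import itertools


def assign_languages(languages: list[str],
                     n_clients: int,
                     langs_per_client: int) -> list[list[str]]:
    """Assign languages to federated clients round-robin.
    `langs_per_client` controls within-client multilinguality:
    1 yields fully monolingual clients; len(languages) yields
    fully multilingual clients (hypothetical sketch)."""
    cycle = itertools.cycle(languages)
    return [[next(cycle) for _ in range(langs_per_client)]
            for _ in range(n_clients)]
```

For example, with three languages and three clients, `langs_per_client=1` gives one language per client (the monolingual end of the sweep), while `langs_per_client=3` gives every client all three languages (the multilingual end).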

Original Abstract

Federated Learning (FL) of Large Language Models (LLMs) in multilingual environments presents significant challenges stemming from heterogeneous language distributions across clients and disparities in language resource availability. To address these challenges, we extended the FederatedScope-LLM framework to support multilingual instruction-tuning experiments with LLMs. We also introduced a novel client-specific early stopping mechanism, Local Dynamic Early Stopping (LDES-FL), which allows clients to pause and resume local training based on client-side validation performance, enhancing training efficiency and sustainability. Through a series of experiments, we studied how client language composition - from fully monolingual to increasingly multilingual clients - affects multilingual quality, fairness and training cost. Monolingual local fine-tuning remains the most effective for single-language specialization, whereas federated training is better suited to learning a single balanced multilingual model. In FL, increasing within-client multilinguality leads to stronger and fairer global models, narrows the gap to centralized multilingual fine-tuning, and yields the largest gains for lower-resource languages, albeit at the cost of more optimization steps. Overall, our results identify client language composition as a key design variable in multilingual FL, shaping performance, fairness and efficiency.

Tags

Federated Learning  Multilingual LLMs  Instruction Tuning  Client Language Composition  Early Stopping

arXiv Category

cs.CL