Optimal Splitting of Language Models from Mixtures to Specialized Domains
AI Summary
The paper proposes a method for optimizing the split training of language models, allocating compute so that each model performs better on its specialized domain.
Main Contributions
- Proposes an optimization method for the split training of pretrained models
- Uses scaling laws to predict model loss
- Experimentally validates the method's effectiveness on common sense knowledge and reasoning tasks
Methodology
Pretrain multiple models independently over a general pretraining corpus, then use scaling laws to determine the optimal compute allocation between pretraining and continued pretraining (specialization).
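The allocation step above can be sketched as a simple optimization over a parametric loss surface. The sketch below assumes a Chinchilla-style form L(N, D, D') = E + A/N^α + B/D^β + C/D'^γ with illustrative, made-up coefficients; the paper fits its own scaling law from training runs, so the functional form, the constants, and the `best_split` helper here are all hypothetical, not the authors' implementation.

```python
import numpy as np

# Hypothetical constants for a Chinchilla-style loss surface; a real
# application would fit these from observed (N, D, D', loss) runs.
E, A, B, C = 1.7, 400.0, 1200.0, 300.0
alpha, beta, gamma = 0.34, 0.28, 0.30

def predicted_loss(N, D, D_prime):
    """Predicted specialized-domain loss for a model with N parameters,
    D pretraining tokens, and D_prime continued-pretraining tokens."""
    return E + A / N**alpha + B / D**beta + C / D_prime**gamma

def best_split(N, D_total, steps=999):
    """Grid-search the fraction of a fixed token budget D_total to spend
    on general pretraining vs. domain specialization."""
    fracs = np.linspace(0.001, 0.999, steps)
    losses = predicted_loss(N, fracs * D_total, (1.0 - fracs) * D_total)
    i = int(np.argmin(losses))
    return float(fracs[i]), float(losses[i])

frac, loss = best_split(N=1e9, D_total=2e10)
print(f"pretrain fraction: {frac:.3f}, predicted loss: {loss:.3f}")
```

Because the fitted law extrapolates in N, D, and D', the same search can pick the split for model sizes and budgets larger than any run used for fitting.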
Original Abstract
Language models achieve impressive performance on a variety of knowledge, language, and reasoning tasks due to the scale and diversity of pretraining data available. The standard training recipe is a two-stage paradigm: pretraining first on the full corpus of data followed by specialization on a subset of high quality, specialized data from the full corpus. In the multi-domain setting, this involves continued pretraining of multiple models on each specialized domain, referred to as split model training. We propose a method for pretraining multiple models independently over a general pretraining corpus, and determining the optimal compute allocation between pretraining and continued pretraining using scaling laws. Our approach accurately predicts the loss of a model of size N with D pretraining and D' specialization tokens, and extrapolates to larger model sizes and number of tokens. Applied to language model training, our approach improves performance consistently across common sense knowledge and reasoning benchmarks across different model sizes and compute budgets.