LLM Reasoning relevance: 10/10

Data Repetition Beats Data Scaling in Long-CoT Supervised Fine-Tuning

Dawid J. Kopiczko, Sagar Vaze, Tijmen Blankevoort, Yuki M. Asano
arXiv: 2602.11149v1 · Published: 2026-02-11 · Updated: 2026-02-11

AI Summary

In supervised fine-tuning on chain-of-thought data, repeated training outperforms scaling up the dataset, improving the reasoning ability of large language models.

Main Contributions

  • Showed that data repetition outperforms data scaling in chain-of-thought fine-tuning
  • Proposed training token accuracy as a stopping criterion for repeated training
  • Revealed the relationship between full memorization and improved generalization

Methodology

Comparative training experiments with the Olmo3-7B model across different epoch counts and dataset sizes, evaluated on the AIME'24/25 and GPQA benchmarks.

Original Abstract

Supervised fine-tuning (SFT) on chain-of-thought data is an essential post-training step for reasoning language models. Standard machine learning intuition suggests that training with more unique training samples yields better generalization. Counterintuitively, we show that SFT benefits from repetition: under a fixed update budget, training for more epochs on smaller datasets outperforms single-epoch training on larger datasets. On AIME'24/25 and GPQA benchmarks, Olmo3-7B trained for 128 epochs on 400 samples outperforms the equivalent 1 epoch on 51200 samples by 12-26 percentage points, with no additional catastrophic forgetting. We find that training token accuracy reliably signals when repetition has saturated; improvements from additional epochs plateau at full memorization, a pattern consistent across all settings. These findings provide a practical approach for reasoning SFT, where scaling epochs with token accuracy as a stopping criterion can replace expensive undirected data scaling. We pose the repetition advantage, where full memorization coincides with improved generalization, as a new open problem for the community in understanding the training dynamics of large language models.
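The abstract's practical recipe, repeating epochs on a small dataset and stopping once training token accuracy saturates at full memorization, can be sketched as follows. This is a minimal illustration, not the authors' code: `run_epoch`, the 0.999 threshold, and the 128-epoch cap are hypothetical stand-ins (128 matches the paper's largest reported epoch count).

```python
# Hypothetical sketch of the stopping criterion from the paper:
# keep training on the same small dataset and stop once training
# token accuracy plateaus at (near-)full memorization.

def token_accuracy(predicted, target):
    """Fraction of target tokens predicted exactly (greedy match)."""
    correct = sum(p == t for p, t in zip(predicted, target))
    return correct / len(target)

def repeat_until_memorized(run_epoch, max_epochs=128, threshold=0.999):
    """Run epochs on a fixed small dataset until training token
    accuracy reaches `threshold` (full memorization) or the
    update budget (`max_epochs`) is exhausted.

    `run_epoch` is a hypothetical callback that performs one epoch
    of SFT and returns that epoch's training token accuracy.
    """
    acc = 0.0
    for epoch in range(1, max_epochs + 1):
        acc = run_epoch()
        if acc >= threshold:
            break  # repetition has saturated; more epochs plateau
    return epoch, acc

# Simulated usage: accuracy rises toward memorization over epochs.
accs = iter([0.6, 0.8, 0.95, 0.999, 1.0])
epoch, acc = repeat_until_memorized(lambda: next(accs))
print(epoch, acc)  # stops at the fourth epoch, acc 0.999
```

In the paper's setting, this criterion replaces undirected data scaling: the epoch budget is spent on repetition, and the plateau in training token accuracy marks the point where further repetition stops helping.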

Tags

Chain-of-Thought · Supervised Fine-tuning · Data Repetition · Reasoning

arXiv Category

cs.CL