A Survey of On-Policy Distillation for Large Language Models
AI Summary
This paper presents a comprehensive survey of On-Policy Distillation methods for LLMs, filling the gap left by the absence of a unified treatment of this area.
Main Contributions
- The first comprehensive survey of On-Policy Distillation (OPD) methods for LLMs
- A unified f-divergence framework for analyzing OPD
- An organization of OPD methods along three dimensions: feedback signal, teacher access, and loss granularity
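The unified objective behind these contributions can be sketched as follows. This is a generic form consistent with the abstract, not the paper's exact equation: the student policy π_θ samples its own trajectories, and an f-divergence to the teacher distribution p_T is accumulated over the tokens of each sampled sequence.

```latex
\mathcal{L}_{\mathrm{OPD}}(\theta)
  \;=\;
  \mathbb{E}_{y \sim \pi_\theta(\cdot \mid x)}
  \left[
    \sum_{t=1}^{|y|}
      D_f\!\bigl(
        p_T(\cdot \mid x, y_{<t})
        \,\big\|\,
        \pi_\theta(\cdot \mid x, y_{<t})
      \bigr)
  \right]
```

Different choices of f recover familiar losses (forward KL, reverse KL, Jensen–Shannon), and the summation range reflects the survey's loss-granularity axis (token-level vs. sequence-level vs. hybrid).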
Methodology
Through literature review and synthesis, the paper establishes a unified framework, analyzes the different OPD methods within it, and discusses industrial deployments and future directions.
Original Abstract
Knowledge distillation has become a primary mechanism for transferring reasoning and domain expertise from frontier Large Language Models (LLMs) to smaller, deployable students. However, the dominant paradigm remains *off-policy*: students train on static teacher-generated data and never encounter their own errors during learning. This train–test mismatch, an instance of *exposure bias*, causes prediction errors to compound autoregressively at inference time. On-Policy Distillation (OPD) addresses this by letting the student generate its own trajectories and receive teacher feedback on these self-generated outputs, grounding distillation in the theory of interactive imitation learning. Despite rapid growth spanning divergence minimization, reward-guided learning, and self-play, the OPD literature remains fragmented with no unified treatment. This survey provides the first comprehensive overview of OPD for LLMs. We introduce a unified *f*-divergence framework over on-policy samples and organize the landscape along three orthogonal dimensions: *feedback signal* (logit-based, outcome-based, or self-play), *teacher access* (white-box, black-box, or teacher-free), and *loss granularity* (token-level, sequence-level, or hybrid). We systematically analyze representative methods, examine industrial deployments, and identify open problems including distillation scaling laws, uncertainty-aware feedback, and agent-level distillation.
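To make the OPD loop concrete, here is a minimal sketch of one distillation step under white-box teacher access with a token-level reverse-KL loss (one common instantiation of the f-divergence framework). All names (`opd_step`, the toy logit functions) are illustrative, not the paper's implementation; real systems would backpropagate through the student's logits rather than just report the loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def reverse_kl(student_probs, teacher_probs, eps=1e-12):
    """Reverse KL D(student || teacher): the mode-seeking divergence
    often used as the per-token teacher-feedback signal in OPD."""
    s = np.clip(student_probs, eps, 1.0)
    t = np.clip(teacher_probs, eps, 1.0)
    return float(np.sum(s * (np.log(s) - np.log(t))))

def opd_step(student_logits_fn, teacher_logits_fn, prompt, max_len=8):
    """One on-policy distillation step (sketch):
    1. the student samples its OWN trajectory (on-policy rollout);
    2. the teacher scores every student-visited prefix (white-box access);
    3. the loss is the mean per-token reverse KL, to be minimized.
    Returns (generated_tokens, scalar_loss)."""
    tokens, losses = list(prompt), []
    for _ in range(max_len):
        s_probs = softmax(student_logits_fn(tokens))
        # Teacher feedback is computed on the student's own prefix --
        # this is what distinguishes OPD from off-policy distillation.
        t_probs = softmax(teacher_logits_fn(tokens))
        losses.append(reverse_kl(s_probs, t_probs))
        # The next token comes from the student, not the teacher.
        tokens.append(int(rng.choice(len(s_probs), p=s_probs)))
    return tokens[len(prompt):], float(np.mean(losses))
```

Swapping `reverse_kl` for forward KL or a Jensen–Shannon term moves along the survey's f-divergence axis, while replacing the per-token loop with a single sequence-level score moves along the loss-granularity axis.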