Retraining as Approximate Bayesian Inference
AI Summary
Treats model retraining as approximate Bayesian inference under computational constraints and proposes a decision-theoretic retraining policy.
Key Contributions
- Proposes viewing retraining as approximate Bayesian inference
- Establishes a decision-theoretic framework for retraining
- Provides an evidence-based mechanism for triggering retraining
Methodology
Uses Bayesian inference and decision theory to model the retraining decision as a cost-minimization problem.
Original Abstract
Model retraining is usually treated as an ongoing maintenance task. But as Harrison Katz now argues, retraining can be better understood as approximate Bayesian inference under computational constraints. The gap between a continuously updated belief state and your frozen deployed model is "learning debt," and the retraining decision is a cost minimization problem with a threshold that falls out of your loss function. In this article Katz provides a decision-theoretic framework for retraining policies. The result is evidence-based triggers that replace calendar schedules and make governance auditable. For readers less familiar with the Bayesian and decision-theoretic language, key terms are defined in a glossary at the end of the article.
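The cost-minimization framing above can be sketched in a few lines. The following is a hypothetical illustration, not Katz's actual formulation: all function names, parameters, and numbers are invented for this sketch. It captures the core idea that "learning debt" is the expected excess loss from serving a stale model, and that retraining is triggered when that debt exceeds the one-time retraining cost.

```python
def should_retrain(stale_loss, fresh_loss, horizon, retrain_cost):
    """Evidence-based retraining trigger (illustrative sketch).

    stale_loss:   estimated per-period loss of the frozen deployed model
    fresh_loss:   estimated per-period loss of a freshly retrained model
    horizon:      number of periods the decision covers
    retrain_cost: one-time cost of retraining (compute, validation, rollout)
    """
    # Learning debt: expected excess loss accumulated by staying stale.
    learning_debt = (stale_loss - fresh_loss) * horizon
    # Retrain only when the debt outweighs the cost of paying it down.
    return learning_debt > retrain_cost

# Example with made-up numbers: debt (0.08 - 0.05) * 30 = 0.9 > 0.5,
# so the trigger fires.
print(should_retrain(0.08, 0.05, 30, 0.5))  # True
```

The threshold here "falls out of the loss function" in the article's sense: no calendar schedule appears, and the decision is auditable because every input is an explicit estimate.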