Agent Tuning & Optimization Relevance: 5/10

Unsupervised Continual Learning for Amortized Bayesian Inference

Aayush Mishra, Šimon Kucharský, Paul-Christian Bürkner
arXiv: 2602.22884v1 Published: 2026-02-26 Updated: 2026-02-26

AI Summary

Proposes an unsupervised continual learning framework that improves the performance of amortized Bayesian inference on sequentially arriving data.

Key Contributions

  • Proposes an unsupervised continual learning framework for ABI
  • Introduces SC training with episodic replay
  • Introduces SC training with elastic weight consolidation
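Episodic replay keeps a small memory of past observations and mixes them into each fine-tuning batch. A minimal sketch of such a buffer, using reservoir sampling so every observation seen so far is equally likely to be retained; the class name and interface are illustrative, not from the paper:

```python
import random

class ReplayBuffer:
    """Fixed-capacity memory of past observations, maintained by
    reservoir sampling: each of the `seen` observations has an equal
    chance (capacity / seen) of currently being stored."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, obs):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(obs)
        else:
            # Replace a random slot with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = obs

    def sample(self, k):
        """Draw up to k stored observations to mix into the current batch."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))
```

During sequential SC fine-tuning, each gradient step would then combine newly arrived observations with `buffer.sample(k)` so the network keeps seeing data from earlier tasks.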

Methodology

Combines simulation-based pre-training with unsupervised self-consistency (SC) fine-tuning, using episodic replay and elastic weight consolidation to counter catastrophic forgetting.
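The elastic weight consolidation (EWC) variant regularizes fine-tuning updates toward the pre-trained parameters, weighted by each parameter's estimated importance (Fisher information). A minimal NumPy sketch of the standard EWC penalty added to the SC objective; function and variable names are assumptions for illustration, and `sc_loss` stands in for the paper's self-consistency loss:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam):
    """Quadratic EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.
    theta_star are the pre-trained parameters; fisher holds the diagonal
    Fisher information, estimated after simulation-based pre-training."""
    return 0.5 * lam * float(np.sum(fisher * (theta - theta_star) ** 2))

def regularized_loss(sc_loss, theta, theta_star, fisher, lam=1.0):
    """Total fine-tuning objective: SC loss on new data plus the EWC
    penalty anchoring parameters critical to the pre-training task."""
    return sc_loss + ewc_penalty(theta, theta_star, fisher, lam)
```

Parameters with large Fisher values are pulled strongly back toward their pre-trained values, while unimportant parameters remain free to adapt to the new data.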

Original Abstract

Amortized Bayesian Inference (ABI) enables efficient posterior estimation using generative neural networks trained on simulated data, but often suffers from performance degradation under model misspecification. While self-consistency (SC) training on unlabeled empirical data can enhance network robustness, current approaches are limited to static, single-task settings and fail to handle sequentially arriving data or distribution shifts. We propose a continual learning framework for ABI that decouples simulation-based pre-training from unsupervised sequential SC fine-tuning on real-world data. To address the challenge of catastrophic forgetting, we introduce two adaptation strategies: (1) SC with episodic replay, utilizing a memory buffer of past observations, and (2) SC with elastic weight consolidation, which regularizes updates to preserve task-critical parameters. Across three diverse case studies, our methods significantly mitigate forgetting and yield posterior estimates that outperform standard simulation-based training, achieving estimates closer to MCMC reference, providing a viable path for trustworthy ABI across a range of different tasks.

Tags

Continual Learning · Bayesian Inference · Unsupervised Learning

arXiv Categories

stat.ML cs.LG