LLM Memory & RAG relevance: 7/10

Routing without Forgetting

Alessio Masano, Giovanni Bellitto, Dipam Goswami, Joost van de Weijer, Concetto Spampinato
arXiv: 2603.09576v1 Published: 2026-03-10 Updated: 2026-03-10

AI Summary

Proposes RwF, a Transformer architecture that uses energy-based associative retrieval layers to achieve dynamic routing in online continual learning.

Key Contributions

  • Proposes the Routing without Forgetting (RwF) architecture
  • Uses energy-based associative retrieval to realize dynamic routing
  • Outperforms existing prompt-based methods on online continual learning benchmarks

Methodology

RwF performs energy-based associative retrieval over the Transformer's token embeddings at each layer to generate dynamic prompts, enabling input-conditioned dynamic routing.
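The single-step associative retrieval described in the abstract corresponds to the Modern Hopfield Network update, whose softmax form is the closed-form minimizer of a strictly convex free energy. A minimal NumPy sketch of that one-step retrieval (not the paper's actual implementation; `beta` and the toy pattern setup are assumptions for illustration):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hopfield_retrieve(patterns, query, beta=8.0):
    """Single-step Modern Hopfield retrieval:
    xi' = patterns^T @ softmax(beta * patterns @ query).
    patterns: (N, d) stored embeddings; query: (d,)."""
    scores = beta * patterns @ query       # similarity of the query to each stored pattern
    return patterns.T @ softmax(scores)    # convex combination concentrated on the best match

# Toy usage: a noisy query retrieves (approximately) the closest stored pattern.
rng = np.random.default_rng(0)
patterns = rng.standard_normal((5, 16))
patterns /= np.linalg.norm(patterns, axis=1, keepdims=True)
query = patterns[2] + 0.1 * rng.standard_normal(16)
retrieved = hopfield_retrieve(patterns, query)
print(int(np.argmax(patterns @ retrieved)))  # index of the recovered pattern
```

Because the retrieval is a single closed-form step rather than an iterative optimization, it can run inside every forward pass, which is what makes this style of routing viable under strict online constraints.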

Original Abstract

Continual learning in transformers is commonly addressed through parameter-efficient adaptation: prompts, adapters, or LoRA modules are specialized per task while the backbone remains frozen. Although effective in controlled multi-epoch settings, these approaches rely on gradual gradient-based specialization and struggle in Online Continual Learning (OCL), where data arrive as a non-stationary stream and each sample may be observed only once. We recast continual learning in transformers as a routing problem: under strict online constraints, the model must dynamically select the appropriate representational subspace for each input without explicit task identifiers or repeated optimization. We thus introduce Routing without Forgetting (RwF), a transformer architecture augmented with energy-based associative retrieval layers inspired by Modern Hopfield Networks. Instead of storing or merging task-specific prompts, RwF generates dynamic prompts through single-step associative retrieval over the transformer token embeddings at each layer. Retrieval corresponds to the closed-form minimization of a strictly convex free-energy functional, enabling input-conditioned routing within each forward pass, independently of iterative gradient refinement. Across challenging class-incremental benchmarks, RwF improves over existing prompt-based methods. On Split-ImageNet-R and Split-ImageNet-S, RwF outperforms prior prompt-based approaches by a large margin, even in few-shot learning regimes. These results indicate that embedding energy-based associative routing directly within the transformer backbone provides a principled and effective foundation for OCL.

Tags

Continual Learning · Transformers · Routing · Online Learning

arXiv Categories

cs.LG cs.AI