AI Agents relevance: 7/10

Finding Structure in Continual Learning

Pourya Shamsolmoali, Masoumeh Zareapoor
arXiv: 2602.04555v1 Published: 2026-02-04 Updated: 2026-02-04

AI Summary

Reformulates the continual learning objective using Douglas-Rachford Splitting (DRS) to balance stability and plasticity.

Key Contributions

  • Proposes a DRS-based framework for continual learning
  • Decouples the plasticity and stability objectives
  • Requires no auxiliary modules or complex add-ons

Methodology

DRS decomposes the continual learning objective into two independent optimization targets, one for plasticity and one for stability, and iterates over their proximal operators to reach a consensus between them.
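The DRS iteration described above can be sketched on a toy problem. This is a hypothetical illustration, not the paper's implementation: both objectives are stand-in quadratics (a "plasticity" loss pulling toward new-task parameters `a`, a "stability" loss pulling toward old parameters `b`) so the proximal operators have closed forms; the names `prox_quadratic`, `a`, `b`, and `gamma` are assumptions for this sketch.

```python
import numpy as np

# Toy DRS sketch: minimize f(x) + g(x), where f stands in for the
# plasticity loss (fit the new task) and g for the stability loss
# (stay near old parameters). Both are 0.5 * ||x - target||^2 here,
# chosen only so the proximal operators are available in closed form.

a = np.array([1.0, 3.0])   # hypothetical new-task target (plasticity)
b = np.array([-1.0, 1.0])  # hypothetical old-parameter anchor (stability)
gamma = 1.0                # proximal step size

def prox_quadratic(v, target, gamma):
    # Proximal operator of f(x) = 0.5 * ||x - target||^2 at point v
    return (v + gamma * target) / (1.0 + gamma)

z = np.zeros(2)
for _ in range(50):
    x = prox_quadratic(z, a, gamma)          # plasticity step
    y = prox_quadratic(2 * x - z, b, gamma)  # stability step on the reflection
    z = z + (y - x)                          # consensus (governing-sequence) update

# For convex f and g, x converges to a minimizer of f + g;
# for these quadratics that is (a + b) / 2.
print(x)  # -> approximately [0. 2.]
```

The three-line update is the standard Douglas-Rachford scheme: each objective is touched only through its own proximal operator, which is what lets the two goals be handled separately rather than summed into one conflicting gradient.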

Original Abstract

Learning from a stream of tasks usually pits plasticity against stability: acquiring new knowledge often causes catastrophic forgetting of past information. Most methods address this by summing competing loss terms, creating gradient conflicts that are managed with complex and often inefficient strategies such as external memory replay or parameter regularization. We propose a reformulation of the continual learning objective using Douglas-Rachford Splitting (DRS). This reframes the learning process not as a direct trade-off, but as a negotiation between two decoupled objectives: one promoting plasticity for new tasks and the other enforcing stability of old knowledge. By iteratively finding a consensus through their proximal operators, DRS provides a more principled and stable learning dynamic. Our approach achieves an efficient balance between stability and plasticity without the need for auxiliary modules or complex add-ons, providing a simpler yet more powerful paradigm for continual learning systems.

Tags

Continual Learning, Douglas-Rachford Splitting, Optimization, Stability, Plasticity

arXiv Categories

cs.LG