Representation Stability in a Minimal Continual Learning Agent
AI Summary
This work studies representation stability in a minimal continual learning agent, revealing a tradeoff between representational plasticity and stability.
Key Contributions
- Designed a minimal continual learning agent
- Quantified representational change and defined a stability metric
- Revealed a plasticity-stability tradeoff in representations without explicit regularization or similar mechanisms
Methodology
The authors design an agent that maintains a persistent state vector and incrementally updates it as new textual data is introduced. Representational change is quantified using cosine similarity between successive state vectors, and longitudinal experiments are conducted across executions.
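The summary does not specify the agent's update rule, so the following is a minimal Python sketch under stated assumptions: `update_state` uses a hypothetical exponential-moving-average blend (the blending weight `alpha` and the random "text embeddings" are illustrative, not from the paper), and the state is renormalized after each update as the abstract's "normalized state vectors" suggests.

```python
import numpy as np

def update_state(state, new_vec, alpha=0.2):
    """Blend a new input embedding into the persistent state.
    The EMA-style rule and alpha are illustrative assumptions."""
    blended = (1 - alpha) * state + alpha * new_vec
    return blended / np.linalg.norm(blended)  # keep the state normalized

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
state = rng.normal(size=64)
state /= np.linalg.norm(state)

# Eight executions, matching the paper's longitudinal setup; the random
# vectors stand in for embeddings of incoming textual data.
sims = []
for _ in range(8):
    new_vec = rng.normal(size=64)
    new_state = update_state(state, new_vec)
    sims.append(cosine_similarity(state, new_state))  # successive-state similarity
    state = new_state
```

Tracking `sims` over executions is what lets the longitudinal analysis distinguish an early plastic regime (lower successive similarity) from a later stable one.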
Original Abstract
Continual learning systems are increasingly deployed in environments where retraining or reset is infeasible, yet many approaches emphasize task performance rather than the evolution of internal representations over time. In this work, we study a minimal continual learning agent designed to isolate representational dynamics from architectural complexity and optimization objectives. The agent maintains a persistent state vector across executions and incrementally updates it as new textual data is introduced. We quantify representational change using cosine similarity between successive normalized state vectors and define a stability metric over time intervals. Longitudinal experiments across eight executions reveal a transition from an initial plastic regime to a stable representational regime under consistent input. A deliberately introduced semantic perturbation produces a bounded decrease in similarity, followed by recovery and restabilization under subsequent coherent input. These results demonstrate that meaningful stability-plasticity tradeoffs can emerge in a minimal, stateful learning system without explicit regularization, replay, or architectural complexity. The work establishes a transparent empirical baseline for studying representational accumulation and adaptation in continual learning systems.
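The abstract defines a "stability metric over time intervals" without giving its form. One plausible reading, sketched below as an assumption, is the mean cosine similarity between successive normalized state vectors over an interval of executions; the function name `stability` and the toy two-state trajectory are illustrative, not taken from the paper.

```python
import numpy as np

def stability(states, t0, t1):
    """Mean cosine similarity between successive unit-norm state vectors
    over executions t0..t1 (assumed form of the paper's stability metric)."""
    sims = [float(np.dot(states[t], states[t + 1]))  # unit vectors: dot = cosine
            for t in range(t0, t1)]
    return sum(sims) / len(sims)

# Toy trajectory: early states differ (plastic), later states repeat (stable).
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
states = [e1, e2, e2, e2, e2]

early = stability(states, 0, 1)  # 0.0: orthogonal successive states
late = stability(states, 1, 4)   # 1.0: identical successive states
```

Under this reading, the reported plastic-to-stable transition corresponds to this metric rising toward 1 under consistent input, dipping in a bounded way after the semantic perturbation, and recovering afterwards.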