Agent Tuning & Optimization relevance: 6/10

FeDMRA: Federated Incremental Learning with Dynamic Memory Replay Allocation

Tiantian Wang, Xiang Xiang, Simon S. Du
arXiv: 2603.28455v1 Published: 2026-03-30 Updated: 2026-03-30

AI Summary

Proposes a federated incremental learning method based on dynamic memory allocation to address non-IID data and catastrophic forgetting in medical scenarios.

Key Contributions

  • Proposes a dynamic memory allocation strategy that optimizes client storage resources
  • Accounts for data heterogeneity to achieve performance fairness across clients
  • Validates effectiveness on medical image datasets

Methodology

Built on a data replay mechanism, the method dynamically allocates exemplar memory across clients to balance per-client performance, mitigate catastrophic forgetting, and realize federated incremental learning.
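The paper does not spell out its allocation rule in this summary, so the sketch below is only a hypothetical illustration of the idea: instead of giving every client a fixed exemplar budget, the server scores clients by local heterogeneity (here approximated by class count) and current performance, then splits the global budget proportionally. The function name `allocate_exemplar_memory` and the mixing weight `alpha` are assumptions, not the paper's method.

```python
def allocate_exemplar_memory(total_budget, class_counts, accuracies, alpha=0.5):
    """Split a global exemplar budget across clients (illustrative sketch).

    total_budget  -- total number of exemplar slots available
    class_counts  -- {client_id: number of locally observed classes}
    accuracies    -- {client_id: current accuracy in [0, 1]}
    alpha         -- weight between heterogeneity and performance need

    Clients with more local classes and lower accuracy get a larger
    share, so replay memory goes where forgetting hurts most.
    """
    max_classes = max(class_counts.values())
    scores = {
        cid: alpha * class_counts[cid] / max_classes      # heterogeneity term
             + (1 - alpha) * (1.0 - accuracies[cid])      # performance-need term
        for cid in class_counts
    }
    total = sum(scores.values())
    return {cid: int(round(total_budget * s / total)) for cid, s in scores.items()}
```

For example, with a budget of 100 slots, a client holding 10 classes at 90% accuracy and a client holding 5 classes at 60% accuracy would receive 55 and 45 slots respectively under this scoring, rather than a fixed 50/50 split.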

Original Abstract

In federated healthcare systems, Federated Class-Incremental Learning (FCIL) has emerged as a key paradigm, enabling continuous adaptive model learning among distributed clients while safeguarding data privacy. However, in practical applications, data across agent nodes within the distributed framework often exhibits non-independent and identically distributed (non-IID) characteristics, rendering traditional continual learning methods inapplicable. To address these challenges, this paper covers more comprehensive incremental task scenarios and proposes a dynamic memory allocation strategy for exemplar storage based on the data replay mechanism. This strategy fully taps into the inherent potential of data heterogeneity, while taking into account the performance fairness of all participating clients, thereby establishing a balanced and adaptive solution to mitigate catastrophic forgetting. Unlike the fixed allocation of client exemplar memory, the proposed scheme emphasizes the rational allocation of limited storage resources among clients to improve model performance. Furthermore, extensive experiments are conducted on three medical image datasets, and the results demonstrate significant performance improvements compared to existing baseline models.
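Since allocations change over rounds, each client needs a replay buffer whose capacity can shrink or grow on demand. The `ExemplarBuffer` class below is a minimal sketch of such a buffer, not the paper's implementation: it evicts from the largest class first (a simple stand-in for herding-style exemplar selection) and serves mixed replay samples for rehearsal.

```python
import random

class ExemplarBuffer:
    """Per-client replay buffer with a server-adjustable capacity (sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # class label -> list of stored samples

    def add(self, label, sample):
        """Store one exemplar, evicting if over capacity."""
        self.store.setdefault(label, []).append(sample)
        self._shrink()

    def resize(self, new_capacity):
        """Apply a new allocation from the server, dropping excess exemplars."""
        self.capacity = new_capacity
        self._shrink()

    def _shrink(self):
        # Evict from the currently largest class until within budget.
        while sum(len(v) for v in self.store.values()) > self.capacity:
            biggest = max(self.store, key=lambda k: len(self.store[k]))
            self.store[biggest].pop()

    def sample(self, k, rng=random):
        """Draw up to k (label, sample) pairs for replay during local training."""
        flat = [(lbl, s) for lbl, xs in self.store.items() for s in xs]
        return rng.sample(flat, min(k, len(flat)))
```

During local training, a client would mix `sample(k)` exemplars into each batch of new-task data; when the server reallocates budgets, `resize` enforces the new limit immediately.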

Tags

Federated Learning, Incremental Learning, Medical Imaging, Non-IID, Dynamic Memory Allocation

arXiv Categories

cs.LG cs.AI cs.CV cs.DC stat.ML