AI Agents Relevance: 8/10

Social Hippocampus Memory Learning

Liping Yi, Zhiming Zhao, Qinghua Hu
arXiv: 2603.25614v1 Published: 2026-03-26 Updated: 2026-03-26

AI Summary

SoHip leverages hippocampal mechanisms to enable collaborative learning among heterogeneous agents through memory sharing, preserving privacy while improving performance.

Key Contributions

  • Proposes the SoHip framework, which performs collaborative learning through memory sharing rather than model sharing
  • Introduces a hippocampus-inspired mechanism for consolidating and fusing long-term memory
  • Provides theoretical analysis of convergence and privacy preservation

Methodology

SoHip abstracts short-term memory from local representations, consolidates it into long-term memory through a hippocampus-inspired mechanism, and fuses it with collectively aggregated long-term memory to enhance local prediction.
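The three-stage pipeline above (abstract short-term memory, consolidate into long-term memory, fuse with the collective memory) can be sketched as a simple vector-memory loop. This is an illustrative assumption, not the paper's actual mechanism: the `HippocampalMemory` class, the exponential-moving-average consolidation, and the averaging fusion are all hypothetical stand-ins for the hippocampus-inspired operations SoHip defines.

```python
import numpy as np

class HippocampalMemory:
    """Hypothetical sketch of one agent's memory module (names assumed)."""

    def __init__(self, dim, decay=0.9):
        self.decay = decay              # consolidation rate (assumed EMA form)
        self.long_term = np.zeros(dim)  # individual long-term memory

    def consolidate(self, short_term):
        # Hippocampus-inspired consolidation, sketched here as an exponential
        # moving average that folds short-term memory into long-term memory.
        self.long_term = self.decay * self.long_term + (1 - self.decay) * short_term
        return self.long_term

def fuse(individual, collective, alpha=0.5):
    # Fuse the agent's long-term memory with the collectively aggregated one
    # to enhance local prediction (convex combination is an assumption).
    return alpha * individual + (1 - alpha) * collective

# Each agent shares only its lightweight long-term memory vector; raw data
# and local models stay on-device. A server averages the shared memories.
agents = [HippocampalMemory(dim=4) for _ in range(3)]
rng = np.random.default_rng(0)
for _ in range(5):
    for agent in agents:
        short_term = rng.normal(size=4)   # stand-in for abstracted local representations
        agent.consolidate(short_term)
collective = np.mean([a.long_term for a in agents], axis=0)
enhanced = fuse(agents[0].long_term, collective)
```

The design point this sketch mirrors is that only the small `long_term` vectors cross agent boundaries, which is what keeps communication lightweight and local data private.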

Original Abstract

Social learning highlights that learning agents improve not in isolation, but through interaction and structured knowledge exchange with others. When introduced into machine learning, this principle gives rise to social machine learning (SML), where multiple agents collaboratively learn by sharing abstracted knowledge. Federated learning (FL) provides a natural collaboration substrate for this paradigm, yet existing heterogeneous FL approaches often rely on sharing model parameters or intermediate representations, which may expose sensitive information and incur additional overhead. In this work, we propose SoHip (Social Hippocampus Memory Learning), a memory-centric social machine learning framework that enables collaboration among heterogeneous agents via memory sharing rather than model sharing. SoHip abstracts each agent's individual short-term memory from local representations, consolidates it into individual long-term memory through a hippocampus-inspired mechanism, and fuses it with collectively aggregated long-term memory to enhance local prediction. Throughout the process, raw data and local models remain on-device, while only lightweight memories are exchanged. We provide theoretical analysis on convergence and privacy preservation properties. Experiments on two benchmark datasets with seven baselines demonstrate that SoHip consistently outperforms existing methods, achieving up to 8.78% accuracy improvements.

Tags

social learning · federated learning · memory sharing · privacy preservation

arXiv Category

cs.LG