Relevance to AI Agents: 9/10

Memory in the LLM Era: Modular Architectures and Strategies in a Unified Framework

Yanchen Wu, Tenghui Lin, Yingli Zhou, Fangyuan Zhang, Qintian Guo, Xun Zhou, Sibo Wang, Xilin Liu, Yuchi Ma, Yixiang Fang
arXiv: 2604.01707v1 Published: 2026-04-02 Updated: 2026-04-02

AI Summary

Systematically compares LLM agent memory methods under one experimental setup, proposes a unified framework and a new memory method, and analyzes future research directions.

Key Contributions

  • Proposes a unified framework for LLM agent memory.
  • Comprehensively compares the performance of existing memory methods.
  • Designs a new memory method that outperforms existing approaches.

Methodology

Constructs a unified framework, runs comparative experiments on benchmark datasets, analyzes existing methods, and designs a new method informed by that analysis.

Original Abstract

Memory emerges as the core module in large language model (LLM)-based agents for long-horizon complex tasks (e.g., multi-turn dialogue, game playing, scientific discovery), where memory can enable knowledge accumulation, iterative reasoning, and self-evolution. A number of memory methods have been proposed in the literature. However, these methods have not been systematically and comprehensively compared under the same experimental settings. In this paper, we first summarize a unified framework that incorporates all the existing agent memory methods from a high-level perspective. We then extensively compare representative agent memory methods on two well-known benchmarks and examine the effectiveness of all methods, providing a thorough analysis of those methods. As a byproduct of our experimental analysis, we also design a new memory method by exploiting modules from existing methods, which outperforms the state-of-the-art methods. Finally, based on these findings, we offer promising future research opportunities. We believe that a deeper understanding of the behavior of existing methods can provide valuable new insights for future research.
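To make the unified, modular framing concrete, the sketch below shows one way such a framework could be organized: memory behavior is decomposed into interchangeable modules (here, a write policy and a retrieval policy) behind a single interface, so that different memory methods become different module combinations. This is a minimal illustration under assumed names (`AgentMemory`, `Writer`, `Retriever`, `VerbatimWriter`, `KeywordRetriever` are all hypothetical), not the paper's actual code or API.

```python
# Hypothetical sketch of a modular agent-memory framework: each memory
# method is a combination of pluggable modules behind one interface.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class MemoryEntry:
    content: str        # raw text stored by the agent
    score: float = 0.0  # retrieval relevance, filled in at query time


class Writer(ABC):
    """Decides what from an interaction gets written into memory."""
    @abstractmethod
    def write(self, store: list[MemoryEntry], observation: str) -> None: ...


class Retriever(ABC):
    """Selects which stored entries to surface for a query."""
    @abstractmethod
    def retrieve(self, store: list[MemoryEntry], query: str,
                 k: int) -> list[MemoryEntry]: ...


class VerbatimWriter(Writer):
    """Trivial write policy: store every observation as-is."""
    def write(self, store, observation):
        store.append(MemoryEntry(content=observation))


class KeywordRetriever(Retriever):
    """Trivial retrieval policy: rank entries by word overlap with the query."""
    def retrieve(self, store, query, k):
        terms = set(query.lower().split())
        for entry in store:
            entry.score = len(terms & set(entry.content.lower().split()))
        return sorted(store, key=lambda e: e.score, reverse=True)[:k]


class AgentMemory:
    """Unified interface: any Writer/Retriever pair plugs in unchanged,
    which is what makes like-for-like comparison of methods possible."""
    def __init__(self, writer: Writer, retriever: Retriever):
        self.store: list[MemoryEntry] = []
        self.writer = writer
        self.retriever = retriever

    def observe(self, observation: str) -> None:
        self.writer.write(self.store, observation)

    def recall(self, query: str, k: int = 3) -> list[str]:
        return [e.content for e in self.retriever.retrieve(self.store, query, k)]


if __name__ == "__main__":
    memory = AgentMemory(VerbatimWriter(), KeywordRetriever())
    memory.observe("User prefers concise answers about database indexing.")
    memory.observe("User is benchmarking memory methods on dialogue tasks.")
    print(memory.recall("memory benchmarks"))
```

Recombining modules in this style is the degree of freedom the abstract alludes to when the authors assemble a new, better-performing method from modules of existing ones.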

Tags

LLM Agent Memory Benchmarking Modular Architecture

arXiv Categories

cs.CL cs.DB