MemoryArena: Benchmarking Agent Memory in Interdependent Multi-Session Agentic Tasks
AI Summary
Introduces MemoryArena, a multi-session agentic task evaluation platform for assessing agents' memory capabilities in realistic settings.
Main Contributions
- Proposes the MemoryArena evaluation framework
- Designs agentic tasks with explicitly interdependent subtasks
- Reveals the limitations of existing memory evaluation methods
Methodology
Builds a unified evaluation environment comprising a variety of agentic tasks, spanning web navigation, planning, information search, and formal reasoning, to test the coupling between an agent's memory and its actions.
Original Abstract
Existing evaluations of agents with memory typically assess memorization and action in isolation. One class of benchmarks evaluates memorization by testing recall of past conversations or text but fails to capture how memory is used to guide future decisions. Another class focuses on agents acting in single-session tasks without the need for long-term memory. However, in realistic settings, memorization and action are tightly coupled: agents acquire memory while interacting with the environment, and subsequently rely on that memory to solve future tasks. To capture this setting, we introduce MemoryArena, a unified evaluation gym for benchmarking agent memory in multi-session Memory-Agent-Environment loops. The benchmark consists of human-crafted agentic tasks with explicitly interdependent subtasks, where agents must learn from earlier actions and feedback by distilling experiences into memory, and subsequently use that memory to guide later actions to solve the overall task. MemoryArena supports evaluation across web navigation, preference-constrained planning, progressive information search, and sequential formal reasoning, and reveals that agents with near-saturated performance on existing long-context memory benchmarks like LoCoMo perform poorly in our agentic setting, exposing a gap in current evaluations for agents with memory.