Trajectory-Informed Memory Generation for Self-Improving Agent Systems
AI Summary
Proposes a trajectory-informed memory generation framework that improves agent performance on complex tasks.
Main Contributions
- Proposes a Trajectory Intelligence Extractor that analyzes agent reasoning patterns
- Designs a Decision Attribution Analyzer that localizes the causes of failures
- Builds a Contextual Learning Generator that produces strategy, recovery, and optimization tips
- Develops an Adaptive Memory Retrieval System that injects relevant learnings into prompts
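The contributions above imply a structured learning record with provenance. A minimal sketch of what such a record might look like; the field names and schema are illustrative assumptions, not the paper's actual data model:

```python
from dataclasses import dataclass, field

# Hypothetical record for one extracted learning. Field names are
# illustrative assumptions, not the paper's actual schema.
@dataclass
class Learning:
    tip_type: str            # "strategy" | "recovery" | "optimization"
    guidance: str            # natural-language tip to inject into prompts
    task_context: str        # description of the task the tip came from
    provenance: dict = field(default_factory=dict)  # source trajectory id, step indices

# Example: a strategy tip distilled from a successful trajectory
strategy_tip = Learning(
    tip_type="strategy",
    guidance="Verify login state before calling account APIs.",
    task_context="account-management scenario",
    provenance={"trajectory_id": "traj-042", "steps": [3, 4]},
)
```

Keeping provenance alongside each tip is what lets the retrieval system trace any injected guidance back to the trajectory and steps it was extracted from.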
Methodology
Structured learnings are extracted from agent execution trajectories; multi-dimensional similarity retrieval then selects the relevant knowledge and injects it into the prompt to guide the agent's actions.
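The retrieve-and-inject step can be sketched as follows. This is a hedged illustration: the paper's actual similarity dimensions and weights are not specified here, so lexical similarity of task descriptions plus tool-set overlap are stand-ins, and the weights `w_text`/`w_tools` are arbitrary:

```python
from difflib import SequenceMatcher

def text_sim(a: str, b: str) -> float:
    """Cheap lexical similarity; a real system would likely use embeddings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def score(task: str, tools: set, memory: dict,
          w_text: float = 0.7, w_tools: float = 0.3) -> float:
    """Weighted combination of two similarity dimensions (illustrative)."""
    union = tools | memory["tools"]
    tool_overlap = len(tools & memory["tools"]) / len(union) if union else 0.0
    return w_text * text_sim(task, memory["task_context"]) + w_tools * tool_overlap

def inject(task: str, tools: set, memories: list, k: int = 2) -> str:
    """Retrieve the top-k learnings and prepend them to the agent prompt."""
    top = sorted(memories, key=lambda m: score(task, tools, m), reverse=True)[:k]
    tips = "\n".join(f"- {m['guidance']}" for m in top)
    return f"Task: {task}\nRelevant learnings from past executions:\n{tips}"
```

A usage example: given stored memories with `task_context`, `tools`, and `guidance` fields, `inject("send a payment", {"bank_api"}, memories)` returns the prompt text with the two highest-scoring tips appended.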
Original Abstract
LLM-powered agents face a persistent challenge: learning from their execution experiences to improve future performance. While agents can successfully complete many tasks, they often repeat inefficient patterns, fail to recover from similar errors, and miss opportunities to apply successful strategies from past executions. We present a novel framework for automatically extracting actionable learnings from agent execution trajectories and utilizing them to improve future performance through contextual memory retrieval. Our approach comprises four components: (1) a Trajectory Intelligence Extractor that performs semantic analysis of agent reasoning patterns, (2) a Decision Attribution Analyzer that identifies which decisions and reasoning steps led to failures, recoveries, or inefficiencies, (3) a Contextual Learning Generator that produces three types of guidance -- strategy tips from successful patterns, recovery tips from failure handling, and optimization tips from inefficient but successful executions, and (4) an Adaptive Memory Retrieval System that injects relevant learnings into agent prompts based on multi-dimensional similarity. Unlike existing memory systems that store generic conversational facts, our framework understands execution patterns, extracts structured learnings with provenance, and retrieves guidance tailored to specific task contexts. Evaluation on the AppWorld benchmark demonstrates consistent improvements, with up to 14.3 percentage point gains in scenario goal completion on held-out tasks and particularly strong benefits on complex tasks (28.5 pp scenario goal improvement, a 149% relative increase).