AI Agents · Relevance: 9/10

MemCollab: Cross-Agent Memory Collaboration via Contrastive Trajectory Distillation

Yurui Chang, Yiran Wu, Qingyun Wu, Lu Lin
arXiv: 2603.23234v1 · Published: 2026-03-24 · Updated: 2026-03-24

AI Summary

MemCollab builds a general, agent-agnostic shared memory by contrasting the reasoning trajectories of different agents, improving reasoning performance.

Key Contributions

  • Proposes MemCollab, a cross-agent collaborative memory framework.
  • Uses a contrastive learning approach to distill agent-agnostic knowledge from reasoning trajectories.
  • Introduces a task-aware retrieval mechanism to improve memory-access efficiency.
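To make the contrastive-distillation idea concrete, here is a toy sketch (my own simplification, not the paper's algorithm): compare two agents' trajectories on the same task, keep the steps both agents share (candidate task-level invariants), and drop steps unique to one agent (likely agent-specific artifacts). All names below are hypothetical.

```python
# Toy illustration of contrastive trajectory distillation (hypothetical
# simplification, not the paper's actual method): retain only reasoning
# steps that appear in both agents' trajectories for the same task.
def distill_shared_steps(traj_a: list[str], traj_b: list[str]) -> list[str]:
    shared = set(traj_a) & set(traj_b)
    # Preserve agent A's step ordering for the retained invariants.
    return [step for step in traj_a if step in shared]

agent_a = ["parse problem", "set up equation", "guess and check", "verify answer"]
agent_b = ["parse problem", "set up equation", "solve symbolically", "verify answer"]
print(distill_shared_steps(agent_a, agent_b))
# ['parse problem', 'set up equation', 'verify answer']
```

The steps unique to each agent ("guess and check", "solve symbolically") are treated as stylistic artifacts and excluded from the shared memory.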

Methodology

By contrasting the reasoning trajectories of different agents, MemCollab learns task-relevant constraints and builds a shared memory, which is accessed through a task-aware retrieval mechanism.
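The task-aware retrieval described above can be sketched as a store that indexes distilled constraints by task category, so that an agent at inference time sees only constraints relevant to its current task. This is an illustrative sketch under my own assumptions; the class and method names are hypothetical, not the paper's API.

```python
# Hypothetical sketch of task-aware shared memory: distilled, agent-agnostic
# constraints are indexed by task category, and retrieval is conditioned on
# the category of the incoming task.
from collections import defaultdict

class SharedMemory:
    def __init__(self) -> None:
        # Maps task category -> list of distilled reasoning constraints.
        self._store: dict[str, list[str]] = defaultdict(list)

    def add_constraint(self, category: str, constraint: str) -> None:
        """Store an agent-agnostic constraint under its task category."""
        self._store[category].append(constraint)

    def retrieve(self, category: str) -> list[str]:
        """Task-aware retrieval: return only constraints for this category."""
        return list(self._store[category])

memory = SharedMemory()
memory.add_constraint("math", "verify each algebraic step before proceeding")
memory.add_constraint("code", "check edge cases for empty inputs")

# An agent solving a math task retrieves only math-related constraints.
print(memory.retrieve("math"))
# ['verify each algebraic step before proceeding']
```

Conditioning retrieval on task category keeps irrelevant constraints out of the agent's context, which is one plausible source of the inference-time efficiency gains the abstract reports.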

Original Abstract

Large language model (LLM)-based agents rely on memory mechanisms to reuse knowledge from past problem-solving experiences. Existing approaches typically construct memory in a per-agent manner, tightly coupling stored knowledge to a single model's reasoning style. In modern deployments with heterogeneous agents, a natural question arises: can a single memory system be shared across different models? We found that naively transferring memory between agents often degrades performance, as such memory entangles task-relevant knowledge with agent-specific biases. To address this challenge, we propose MemCollab, a collaborative memory framework that constructs agent-agnostic memory by contrasting reasoning trajectories generated by different agents on the same task. This contrastive process distills abstract reasoning constraints that capture shared task-level invariants while suppressing agent-specific artifacts. We further introduce a task-aware retrieval mechanism that conditions memory access on task category, ensuring that only relevant constraints are used at inference time. Experiments on mathematical reasoning and code generation benchmarks demonstrate that MemCollab consistently improves both accuracy and inference-time efficiency across diverse agents, including cross-modal-family settings. Our results show that the collaboratively constructed memory can function as a shared reasoning resource for diverse LLM-based agents.

Tags

LLM Agent · Memory · Contrastive Learning · Reasoning

arXiv Categories

cs.AI cs.LG