LLM Reasoning relevance: 9/10

Dynamic Long Context Reasoning over Compressed Memory via End-to-End Reinforcement Learning

Zhuoen Chen, Dongfang Li, Meishan Zhang, Baotian Hu, Min Zhang
arXiv: 2602.08382v1 · Published: 2026-02-09 · Updated: 2026-02-09

AI Summary

Proposes a long-context reasoning framework for LLMs built on compressed memory and reinforcement learning, improving inference efficiency and extending the usable context length.

Key Contributions

  • Proposes a long-context reasoning framework based on chunk-wise compression and selective memory recall
  • Jointly optimizes the compressor and the reasoner via reinforcement learning
  • Experiments show the method is competitive on multi-hop reasoning tasks and can extrapolate to much longer contexts

Methodology

The long input is segmented into chunks and each chunk is compressed into a memory representation; a gating module selects the relevant memory blocks, and the compressor and reasoner are jointly optimized via end-to-end reinforcement learning.
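The pipeline above can be sketched as a toy in plain Python. Everything here is illustrative, not the paper's implementation: `embed`, `compress_chunk`, `gate_select`, and `reason_over` are hypothetical stand-ins, with a bag-of-letters vector in place of the learned compressor, cosine top-k in place of the trained gating classifier, and simple concatenation in place of the RL-optimized reasoner with working memory.

```python
# Toy sketch of chunk-wise compression + selective memory recall.
# All module names and choices here are illustrative assumptions.
from math import sqrt

def embed(text):
    """Toy 26-dim bag-of-letters vector (stand-in for a learned encoder)."""
    v = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            v[ord(ch) - ord("a")] += 1.0
    return v

def compress_chunk(chunk):
    """Compress a raw text chunk into a fixed-size memory vector."""
    return embed(chunk)

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def gate_select(query, memories, k=2):
    """Gating: keep the k memory blocks most similar to the query."""
    q = embed(query)
    order = sorted(range(len(memories)),
                   key=lambda i: cosine(q, memories[i][0]),
                   reverse=True)
    return [memories[i] for i in order[:k]]

def reason_over(query, selected):
    """Iteratively fold selected chunks into an evolving working memory
    (here just concatenated text; the paper uses a learned reasoner)."""
    working = []
    for _vec, raw in selected:
        working.append(raw)
    return " | ".join(working)

def answer(long_text, query, chunk_size=40, k=2):
    chunks = [long_text[i:i + chunk_size]
              for i in range(0, len(long_text), chunk_size)]
    memories = [(compress_chunk(c), c) for c in chunks]  # (vector, raw)
    return reason_over(query, gate_select(query, memories, k))
```

Only the compressed vectors are kept per chunk, so memory grows with the number of chunks rather than raw token count; in the paper this is what allows extrapolation far beyond the training context length.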

Original Abstract

Large Language Models (LLMs) face significant challenges in long-context processing, including quadratic computational costs, information forgetting, and the context fragmentation inherent in retrieval-augmented generation (RAG). We propose a cognitively inspired framework for efficient long-context inference based on chunk-wise compression and selective memory recall, rather than processing all raw tokens. The framework segments long inputs into chunks and encodes each chunk into compressed memory representations using a learned compressor. A gating module dynamically selects relevant memory blocks, which are then iteratively processed by a reasoning module with an evolving working memory to solve downstream tasks. The compressor and reasoner are jointly optimized via end-to-end reinforcement learning, while the gating module is trained separately as a classifier. Experimental results show that the proposed method achieves competitive accuracy on multi-hop reasoning benchmarks such as RULER-HQA, extrapolates context length from 7K to 1.75M tokens, and offers a favorable accuracy-efficiency trade-off compared to strong long-context baselines. In particular, it achieves up to a 2 times reduction in peak GPU memory usage and a 6 times inference speedup over MemAgent.

Tags

Long-Context Reasoning · Compressed Memory · Reinforcement Learning · Multi-hop Reasoning

arXiv Categories

cs.CL cs.AI