LLM Memory & RAG relevance: 9/10

EchoKV: Efficient KV Cache Compression via Similarity-Based Reconstruction

Yixuan Wang, Shiyu Ji, Yijun Liu, Qingfu Zhu, Wanxiang Che
arXiv: 2603.22910v1 Published: 2026-03-24 Updated: 2026-03-24

AI Summary

EchoKV proposes an efficient KV cache compression scheme based on similarity-driven reconstruction, supporting flexible on-demand switching between standard and compressed inference.

Key Contributions

  • Proposes the EchoKV scheme for efficient KV cache compression
  • Reconstructs residual KV components with a lightweight network
  • Introduces a two-stage fine-tuning strategy that reduces training cost

Methodology

Reconstructs residual KV components through a lightweight network that exploits inter-layer and intra-layer similarities among attention heads, optimized via a two-stage fine-tuning procedure.
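To make the reconstruction idea concrete, here is a minimal numpy sketch. It is an illustration under stated assumptions, not EchoKV's actual architecture: the head/shape configuration, the "adjacent heads are similar" pairing, and the use of a per-head least-squares linear map as a stand-in for the paper's lightweight network and fine-tuning are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy KV cache for one layer (shapes are illustrative assumptions).
n_heads, head_dim, seq_len = 8, 16, 64
kv = rng.standard_normal((n_heads, seq_len, head_dim))

# Simulate intra-layer similarity: each odd head is close to a linear
# transform of its even neighbour, mimicking the redundancy that
# similarity-based reconstruction exploits.
for d in range(1, n_heads, 2):
    M = rng.standard_normal((head_dim, head_dim)) * 0.2
    kv[d] = kv[d - 1] @ M + 0.05 * rng.standard_normal((seq_len, head_dim))

kept = list(range(0, n_heads, 2))     # heads stored in the compressed cache
dropped = list(range(1, n_heads, 2))  # residual heads to reconstruct

# "Lightweight network" stand-in: one linear map per dropped head,
# fitted by least squares (a proxy for learned reconstruction weights).
weights = {d: np.linalg.lstsq(kv[d - 1], kv[d], rcond=None)[0]
           for d in dropped}

# At inference only `kept` heads are cached; residual heads are
# reconstructed ("echoed") from their similar retained neighbours.
rel_err = np.mean([np.linalg.norm(kv[d - 1] @ W - kv[d])
                   / np.linalg.norm(kv[d])
                   for d, W in weights.items()])
print(f"mean relative reconstruction error: {rel_err:.3f}")
```

Because only half the heads are stored, the cache footprint halves, while the per-head weight matrices add a fixed cost independent of sequence length; the full cache can always be rebuilt when memory allows, which is the flexibility the paper emphasizes.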

Original Abstract

The increasing memory demand of the Key-Value (KV) cache poses a significant bottleneck for Large Language Models (LLMs) in long-context applications. Existing low-rank compression methods often rely on irreversible parameter transformations, sacrificing the flexibility to switch back to full-precision inference when memory is abundant. In this paper, we propose EchoKV, a flexible KV cache compression scheme that enables on-demand transitions between standard and compressed inference. Unlike traditional compression-decompression paradigms, EchoKV utilizes a lightweight network to reconstruct the residual KV components from a partial subset, leveraging intrinsic inter-layer and intra-layer similarities among attention heads. We further introduce a two-stage fine-tuning strategy that allows for rapid, low-cost training (e.g., ~1 A100 GPU-hour for a 7B model). Experimental results on LongBench and RULER demonstrate that EchoKV consistently outperforms existing methods across various compression ratios while maintaining high throughput for short-context scenarios.

Tags

KV Cache Compression · Large Language Models · Efficiency

arXiv Categories

cs.CL