LLM Memory & RAG relevance: 9/10

LycheeCluster: Efficient Long-Context Inference with Structure-Aware Chunking and Hierarchical KV Indexing

Dongfang Li, Zixuan Liu, Gang Lin, Baotian Hu, Min Zhang
arXiv: 2603.08453v1 Published: 2026-03-09 Updated: 2026-03-09

AI Summary

LycheeCluster enables efficient LLM inference over long contexts through structure-aware chunking and hierarchical KV indexing.

Key Contributions

  • Proposes a KV cache management method based on structure-aware chunking
  • Builds a recursive hierarchical index grounded in the triangle inequality
  • Demonstrates up to a 3.6x end-to-end inference speedup with negligible degradation in model performance
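To make the chunking idea concrete, here is a minimal, hypothetical sketch of boundary-aware chunking: instead of cutting the context into fixed-size windows, chunks are grown sentence by sentence and closed only at sentence boundaries, so no chunk splits a sentence in half. The function name, the regex-based sentence splitter, and the word-count token proxy are all illustrative assumptions, not the paper's implementation.

```python
import re

def boundary_aware_chunks(text, max_tokens=64):
    """Group sentences into chunks without ever cutting through one.

    Hypothetical sketch: sentence splitting via a simple regex and a
    word-count stand-in for a real tokenizer.
    """
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current, count = [], [], 0
    for sent in sentences:
        n = len(sent.split())  # crude token count as a stand-in
        # Close the current chunk only at a sentence boundary.
        if current and count + n > max_tokens:
            chunks.append(' '.join(current))
            current, count = [], 0
        current.append(sent)
        count += n
    if current:
        chunks.append(' '.join(current))
    return chunks

chunks = boundary_aware_chunks(
    "One two three. Four five six. Seven eight.", max_tokens=4)
# Each chunk ends exactly at a sentence boundary.
```

A fixed-size chunker with the same budget would emit "One two three. Four" as its first chunk, splitting the second sentence; the boundary-aware version keeps each sentence whole, which is the semantic-coherence property the paper targets.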

Methodology

Boundary-aware chunking preserves local semantic coherence, and a recursive hierarchical index transforms cache retrieval from a linear scan into a logarithmic-time pruning process.
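The pruning step can be sketched with standard metric-tree search: each index node stores a centroid and a covering radius, and the triangle inequality guarantees that if the query is farther from a centroid than the current best distance plus that radius, no member of the subtree can win, so the whole subtree is skipped. The `Node` layout and Euclidean distance below are illustrative assumptions; the paper's actual index and kernels are not yet released.

```python
import math

class Node:
    """One level of a hypothetical hierarchical index over KV-chunk keys."""
    def __init__(self, centroid, radius, items=None, children=None):
        self.centroid = centroid
        self.radius = radius        # max distance from centroid to any member
        self.items = items or []    # leaf payload: (vector, chunk_id) pairs
        self.children = children or []

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_chunk(node, query, best=(math.inf, None)):
    """Best-first search with triangle-inequality pruning."""
    d = dist(query, node.centroid)
    # Triangle inequality: every member lies within `radius` of the
    # centroid, so its distance to the query is at least d - radius.
    if d - node.radius > best[0]:
        return best  # prune the entire subtree
    for vec, cid in node.items:
        dv = dist(query, vec)
        if dv < best[0]:
            best = (dv, cid)
    # Visit closer children first so `best` tightens early.
    for child in sorted(node.children, key=lambda c: dist(query, c.centroid)):
        best = nearest_chunk(child, query, best)
    return best

leaf1 = Node((0.5, 0.0), 0.5, items=[((0.0, 0.0), 'a'), ((1.0, 0.0), 'b')])
leaf2 = Node((10.5, 0.0), 0.5, items=[((10.0, 0.0), 'c'), ((11.0, 0.0), 'd')])
root = Node((5.5, 0.0), 5.5, children=[leaf1, leaf2])
# A query near leaf1 never descends into leaf2: it is pruned.
```

With balanced clusters, each level discards all but a few subtrees, which is what turns the linear scan over chunks into the logarithmic-time pruning process the summary describes.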

Original Abstract

The quadratic complexity of the attention mechanism and the substantial memory footprint of the Key-Value (KV) cache present severe computational and memory challenges for Large Language Models (LLMs) processing long contexts. Existing retrieval-based methods often compromise semantic integrity through fixed-size chunking and suffer from inefficient linear scanning. In this paper, we propose LycheeCluster, a novel method for efficient KV cache management. LycheeCluster preserves local semantic coherence via boundary-aware chunking and constructs a recursive hierarchical index rooted in the triangle inequality. This design transforms cache retrieval from a linear scan into a theoretically bounded, logarithmic-time pruning process, while a lazy update strategy supports efficient streaming generation. Experiments demonstrate that LycheeCluster achieves up to a 3.6x end-to-end inference speedup with negligible degradation in model performance, outperforming state-of-the-art KV cache management methods (e.g., Quest, ClusterKV). We will release our code and kernels after publication.

Tags

Long context · KV cache · Hierarchical indexing · LLM inference

arXiv Categories

cs.LG cs.AI cs.CL