LLM Reasoning relevance: 9/10

LycheeDecode: Accelerating Long-Context LLM Inference via Hybrid-Head Sparse Decoding

Gang Lin, Dongfang Li, Zhuoen Chen, Yukun Shi, Xuhui Chen, Baotian Hu, Min Zhang
arXiv: 2602.04541v1 Published: 2026-02-04 Updated: 2026-02-04

AI Summary

LycheeDecode accelerates long-context LLM inference through hybrid-head sparse decoding, improving both speed and generation quality.

Key Contributions

  • Proposes a HardKuma-based hybrid-head attention mechanism
  • Dynamically identifies crucial tokens and reuses them across heads
  • Validates the speedup on long-context understanding and reasoning tasks

Methodology

A hardware-efficient top-k selection strategy partitions attention heads into retrieval heads and sparse heads, enabling efficient computation.
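The retrieval/sparse head split can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the voting scheme for aggregating retrieval-head scores, and the single-token decode setting are all assumptions for illustration.

```python
import numpy as np

def hybrid_head_decode_step(q, K, V, retrieval_heads, k=4):
    """One decoding step with a hybrid-head split (illustrative sketch).

    q: (H, d) per-head query for the new token
    K, V: (H, T, d) cached keys/values
    retrieval_heads: indices of heads that scan the full cache
    k: number of crucial token positions shared with sparse heads
    """
    H, T, d = K.shape
    scale = 1.0 / np.sqrt(d)

    # Retrieval heads: full attention over the cache; their scores
    # vote for a shared set of crucial token positions.
    votes = np.zeros(T)
    for h in retrieval_heads:
        scores = K[h] @ q[h] * scale          # (T,)
        votes += np.exp(scores - scores.max())
    crucial = np.argsort(votes)[-k:]          # shared top-k positions

    # Sparse heads reuse only the shared crucial tokens,
    # so their attention is O(k) instead of O(T).
    out = np.zeros((H, d))
    for h in range(H):
        idx = np.arange(T) if h in retrieval_heads else crucial
        s = K[h, idx] @ q[h] * scale
        w = np.exp(s - s.max())
        w /= w.sum()
        out[h] = w @ V[h, idx]
    return out, crucial
```

Because only a small subset of heads scans the full KV cache, most heads read just k cached entries per step, which is where the decoding speedup comes from.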

Original Abstract

The proliferation of long-context large language models (LLMs) exposes a key bottleneck: the rapidly expanding key-value cache during decoding, which imposes heavy memory and latency costs. While recent approaches attempt to alleviate this by sharing a single set of crucial tokens across layers, such coarse-grained sharing undermines model performance by neglecting the functional diversity of attention heads. To address this, we propose LycheeDecode, an efficient decoding method centered on a fine-grained hybrid-head attention mechanism that employs a hardware-efficient top-k selection strategy. Specifically, the novel HardKuma-based mechanism partitions attention heads into a small subset of retrieval heads that dynamically identify crucial tokens and a majority of sparse heads that reuse them for efficient computation. Through extensive experiments on leading models like Llama3 and Qwen3 across diverse benchmarks for long-context understanding (e.g., LongBench, RULER) and complex reasoning (e.g., AIME24, OlympiadBench), we demonstrate that LycheeDecode achieves generative quality comparable to, and at times even surpassing, the full-attention baseline. Crucially, this is accomplished with up to a 2.7x speedup at a 128K context length. By preserving the functional diversity of attention heads, our fine-grained strategy overcomes the performance bottlenecks of existing methods, providing a powerful and validated pathway to both efficient and high-quality long-context LLM inference.
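The abstract's "hardware-efficient top-k selection" plausibly refers to partial selection rather than a full sort over the cache; the sketch below shows that idea with NumPy's `argpartition` (the function name and the descending-order convention are my assumptions, not the paper's API).

```python
import numpy as np

def topk_indices(scores, k):
    """Indices of the k largest scores, largest first (illustrative).

    np.argpartition does an O(T) partial selection instead of an
    O(T log T) full sort; only the k-element slice is then ordered.
    """
    part = np.argpartition(scores, -k)[-k:]   # unordered top-k
    return part[np.argsort(scores[part])[::-1]]
```

On long caches (e.g., 128K tokens) this partial-selection pattern avoids sorting the entire score vector, which is one common way top-k is made cheap on accelerators.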

Tags

LLM Long-Context Inference-Acceleration Attention-Mechanism Sparse-Decoding

arXiv Categories

cs.CL cs.AI