LLM Memory & RAG Relevance: 7/10

HySparse: A Hybrid Sparse Attention Architecture with Oracle Token Selection and KV Cache Sharing

Yizhao Gao, Jianyu Wei, Qihao Zhang, Yu Cheng, Shimao Chen, Zhengju Tang, Zihan Jiang, Yifan Song, Hailin Zhang, Liang Zhao, Bo Yang, Gang Wang, Shijie Cao, Fuli Luo
arXiv: 2602.03560v1 Published: 2026-02-03 Updated: 2026-02-03

AI Summary

HySparse uses full attention layers to guide its sparse attention layers, reducing both computation and memory overhead while improving performance.

Key Contributions

  • Proposes the HySparse architecture, which interleaves full attention and sparse attention layers
  • Uses full attention layers as an oracle for token selection
  • Lets sparse attention layers reuse the KV cache of the full attention layers

Methodology

Full attention layers are interleaved with sparse attention layers; each full attention layer guides the token selection of the subsequent sparse layers and shares its KV cache with them.
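The interleaving idea can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: it assumes single-head attention, scores token importance by the attention mass each key receives in the full layer, and picks a top-k subset; function names (`full_attention_with_oracle`, `sparse_attention`) and all dimensions are invented for the example.

```python
# Minimal sketch of the HySparse pattern: a full attention layer acts as an
# oracle for token selection, and sparse layers reuse its KV cache.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def full_attention_with_oracle(q, k, v, top_k):
    """Full layer: returns its output, the KV cache to be shared,
    and the indices of the most important tokens (the 'oracle')."""
    d = q.shape[-1]
    probs = softmax(q @ k.T / np.sqrt(d), axis=-1)   # (T, T)
    out = probs @ v
    # Importance of each key token = total attention mass it receives.
    importance = probs.sum(axis=0)                   # (T,)
    selected = np.sort(np.argsort(importance)[-top_k:])
    return out, (k, v), selected

def sparse_attention(q, kv_cache, selected):
    """Sparse layer: no KV cache of its own; attends only to the
    oracle-selected tokens from the shared full-layer cache."""
    k, v = kv_cache
    k_s, v_s = k[selected], v[selected]
    d = q.shape[-1]
    return softmax(q @ k_s.T / np.sqrt(d), axis=-1) @ v_s

# Toy usage: one full layer followed by one sparse layer.
rng = np.random.default_rng(0)
T, d, top_k = 8, 4, 3
q, k, v = (rng.standard_normal((T, d)) for _ in range(3))
out_full, kv, sel = full_attention_with_oracle(q, k, v, top_k)
out_sparse = sparse_attention(rng.standard_normal((T, d)), kv, sel)
print(out_full.shape, out_sparse.shape, sel.shape)  # (8, 4) (8, 4) (3,)
```

The key point the sketch illustrates is that the sparse layer receives `(k, v)` and `selected` from the full layer rather than computing or storing anything of its own.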

Original Abstract

This work introduces Hybrid Sparse Attention (HySparse), a new architecture that interleaves each full attention layer with several sparse attention layers. While conceptually simple, HySparse strategically derives each sparse layer's token selection and KV caches directly from the preceding full attention layer. This architecture resolves two fundamental limitations of prior sparse attention methods. First, conventional approaches typically rely on additional proxies to predict token importance, introducing extra complexity and potentially suboptimal performance. In contrast, HySparse uses the full attention layer as a precise oracle to identify important tokens. Second, existing sparse attention designs often reduce computation without saving KV cache. HySparse enables sparse attention layers to reuse the full attention KV cache, thereby reducing both computation and memory. We evaluate HySparse on both 7B dense and 80B MoE models. Across all settings, HySparse consistently outperforms both full attention and hybrid SWA baselines. Notably, in the 80B MoE model with 49 total layers, only 5 layers employ full attention, yet HySparse achieves substantial performance gains while reducing KV cache storage by nearly 10x.
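The "nearly 10x" KV cache saving cited for the 80B MoE model follows directly from the layer counts, assuming only the full attention layers store a KV cache and every sparse layer reuses one:

```python
# KV cache reduction for the 80B MoE configuration in the abstract:
# 49 layers total, of which only 5 are full attention and hold a KV cache.
total_layers, full_attention_layers = 49, 5
reduction = total_layers / full_attention_layers
print(f"{reduction:.1f}x")  # → 9.8x, i.e. "nearly 10x"
```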

Tags

Sparse Attention · KV Cache · Large Language Models · Efficiency

arXiv Categories

cs.CL cs.AI