LLM Reasoning relevance: 8/10

RRAttention: Dynamic Block Sparse Attention via Per-Head Round-Robin Shifts for Long-Context Inference

Siran Liu, Guoxia Wang, Sa Wang, Jinle Zeng, HaoYang Xie, Siyu Lou, JiaBin Yang, DianHai Yu, Haifeng Wang, Chao Yang
arXiv: 2602.05853v1 Published: 2026-02-05 Updated: 2026-02-05

AI Summary

RRAttention proposes a novel dynamic sparse attention mechanism that enables efficient long-context inference through per-head round-robin sampling.

Main Contributions

  • Proposes RRAttention, a new dynamic sparse attention method
  • Achieves efficient global pattern discovery via a per-head round-robin sampling strategy
  • Validates RRAttention's effectiveness on long-context and multimodal tasks

Methodology

RRAttention uses a per-head round-robin strategy to sample queries, performs stride-level aggregation for efficient global pattern discovery, and applies adaptive Top-$\tau$ selection to optimize sparsity.
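The pipeline above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the parameters (L, S, H, tau), the mean-pooled block summaries, and the softmax-based block scoring are all assumptions made for the sketch.

```python
import numpy as np

# Hypothetical toy sizes: L = sequence length, S = stride, H = heads,
# tau = cumulative-probability threshold for adaptive Top-tau selection.
L, S, H, d, tau = 64, 8, 4, 16, 0.9
n_blocks = L // S
rng = np.random.default_rng(0)
Q = rng.normal(size=(H, L, d))  # toy per-head queries
K = rng.normal(size=(H, L, d))  # toy per-head keys

selected = []  # blocks kept per sampled query, per head
for h in range(H):
    # Round-robin: head h samples the (h mod S)-th query in every stride,
    # so the heads jointly cycle through different in-stride offsets.
    q_idx = np.arange(h % S, L, S)                    # one query per stride
    # Stride-level aggregation: mean key per block as a cheap block summary.
    K_blocks = K[h].reshape(n_blocks, S, d).mean(axis=1)
    # Block scores from sampled queries only: (L/S)^2 entries, not L^2.
    scores = Q[h, q_idx] @ K_blocks.T
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    # Adaptive Top-tau: keep the fewest blocks whose mass reaches tau per row.
    sorted_p = np.take_along_axis(probs, np.argsort(-probs, axis=-1), axis=-1)
    keep = np.cumsum(sorted_p, axis=-1) - sorted_p < tau
    selected.append(keep.sum(axis=-1))

print("block scores per head:", n_blocks * n_blocks, "vs full:", L * L)
```

The sketch only covers the pattern-discovery step; in the actual method the selected blocks would then feed a block-sparse attention kernel.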

Original Abstract

The quadratic complexity of attention mechanisms poses a critical bottleneck for large language models processing long contexts. While dynamic sparse attention methods offer input-adaptive efficiency, they face fundamental trade-offs: requiring preprocessing, lacking global evaluation, violating query independence, or incurring high computational overhead. We present RRAttention, a novel dynamic sparse attention method that simultaneously achieves all desirable properties through a head round-robin (RR) sampling strategy. By rotating query sampling positions across attention heads within each stride, RRAttention maintains query independence while enabling efficient global pattern discovery with stride-level aggregation. Our method reduces complexity from $O(L^2)$ to $O(L^2/S^2)$ and employs adaptive Top-$\tau$ selection for optimal sparsity. Extensive experiments on natural language understanding (HELMET) and multimodal video comprehension (Video-MME) demonstrate that RRAttention recovers over 99% of full attention performance while computing only half of the attention blocks, achieving 2.4$\times$ speedup at 128K context length and outperforming existing dynamic sparse attention methods.
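The claimed $O(L^2) \to O(L^2/S^2)$ reduction follows directly from sampling one query per stride and scoring per-block key summaries. A quick arithmetic check, using a hypothetical stride size (S is not stated in this summary):

```python
# Illustration of the O(L^2) -> O(L^2/S^2) reduction, with assumed values.
L = 128 * 1024            # 128K context, as in the abstract's speedup claim
S = 128                   # hypothetical stride size
full = L * L              # pairwise scores evaluated by full attention
sampled = (L // S) ** 2   # scores evaluated on sampled queries x key blocks
print(full // sampled)    # → 16384, i.e. exactly S^2
```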

Tags

attention mechanism · sparse attention · long context · large language models

arXiv Category

cs.CL