LLM Reasoning relevance: 9/10

Near-Oracle KV Selection via Pre-hoc Sparsity for Long-Context Inference

Yifei Gao, Lei Wang, Rong-Cheng Tu, Qixin Zhang, Jun Cheng, Dacheng Tao
arXiv: 2602.08329v1 Published: 2026-02-09 Updated: 2026-02-09

AI Summary

Proposes the Pre-hoc Sparsity method, which addresses the posterior-bias problem in KV selection for long-context inference, improving both inference efficiency and accuracy.

Main Contributions

  • Proposes the Pre-hoc Sparsity (PrHS) method
  • Derives an upper bound on the mutual-information loss, enabling explicit accuracy control
  • Designs three orthogonal pre-hoc selectors along the time, depth, and layer axes

Methodology

Through a mutual-information analysis, an upper bound on the attention-quality loss incurred by discarding tokens is derived, and KV selection is performed before token-importance scoring.
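The key quantity in the bound is the dropped mass delta: the attention mass of the discarded KV entries. A minimal sketch (my own illustration, not the authors' selectors) shows how keeping entries until their cumulative attention mass reaches 1 - delta caps the dropped mass at delta; note this toy version scores attention first, i.e., the posterior setting that PrHS itself avoids by selecting pre-hoc:

```python
import numpy as np

def select_kv_by_dropped_mass(logits, delta=0.05):
    """Keep the smallest set of KV entries whose attention mass
    reaches at least 1 - delta, so the dropped mass is <= delta.

    Illustrative only: unlike PrHS, this uses observed attention
    logits (a posterior selector) to make the bound concrete."""
    # Softmax over attention logits for one query position.
    w = np.exp(logits - logits.max())
    w /= w.sum()
    # Rank entries by descending attention weight.
    order = np.argsort(w)[::-1]
    cum = np.cumsum(w[order])
    # Smallest prefix whose cumulative mass reaches 1 - delta.
    k = int(np.searchsorted(cum, 1.0 - delta)) + 1
    keep = np.sort(order[:k])
    dropped_mass = max(0.0, 1.0 - w[keep].sum())
    return keep, dropped_mass
```

By construction the returned `dropped_mass` never exceeds `delta`, which is the quantity the paper's mutual-information bound is stated in terms of; PrHS's contribution is achieving such control before attention is computed.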

Original Abstract

A core bottleneck in large language model (LLM) inference is the cost of attending over the ever-growing key-value (KV) cache. Although near-oracle top-k KV selection can preserve the quality of dense attention while sharply reducing computation and bandwidth, existing sparse methods generally rely on posterior heuristics, i.e., selectors conditioned on observed attention or proxy scores. Such conditioning introduces posterior bias: it tends to distort true token importance and miss salient tokens, thereby impairing long-range reasoning. To tackle this problem, we propose Pre-hoc Sparsity (PrHS), which selects KV entries before attention scoring and provides explicit accuracy control. Let the attention mass of discarded entries be delta (the dropped mass). Through a marginal-to-mutual-information analysis, we derive an upper bound on the mutual-information loss that depends only on the dropped mass. This relation explains failure modes of posterior heuristics and enables verifiable guarantees by controlling the dropped mass in advance. Within PrHS, we instantiate three orthogonal pre-hoc selectors along the axes of time, depth, and layer. Extensive experiments on LLaMA and Mistral families validate PrHS. Across GSM8K and CoQA, PrHS reduces retrieval overhead by over 90%, achieving 3x higher retrieval sparsity than HShare at matched or better accuracy. It incurs under 1% average degradation on LongBench, lowers attention FLOPs by about 15% versus prior sparse baselines, and yields a 9.9x speedup in attention-operator latency and 2.8x higher throughput on NVIDIA A100-80GB GPUs than the dense baseline.

Tags

Long-context inference · Sparse attention · KV cache · Language-model optimization

arXiv Categories

cs.LG cs.AI cs.IT