InnerQ: Hardware-aware Tuning-free Quantization of KV Cache for Large Language Models
AI Summary
InnerQ proposes a hardware-aware KV-cache quantization scheme aimed at lowering decode latency while preserving accuracy.
Main Contributions
- Proposes the InnerQ quantization scheme, which groups over the inner dimension to optimize memory access and accelerate dequantization
- Combines hybrid quantization, high-precision windows, and per-channel normalization to preserve model accuracy after quantization
- Experiments show that InnerQ outperforms other KV-cache quantization methods on GSM8K
Methodology
InnerQ applies group-wise quantization over the inner dimension, combined with techniques such as hybrid quantization, high-precision windows, and per-channel normalization to optimize KV-cache quantization.
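To make the grouping and hybrid-selection ideas concrete, here is a minimal numpy sketch of group-wise quantization along the inner (last) dimension, choosing symmetric or asymmetric quantization per group from a simple local statistic. The group size, bit width, and the mean-based selection rule are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def quantize_inner_groups(x, group_size=32, bits=4):
    """Quantize-dequantize x group-wise along its inner (last) dimension.

    Per group, pick symmetric quantization when values are roughly
    centered around zero, otherwise asymmetric (min-max). The 0.1
    threshold is a hypothetical stand-in for the paper's local-statistics
    rule.
    """
    qmax = 2 ** (bits - 1) - 1        # symmetric levels: [-qmax, qmax]
    levels = 2 ** bits - 1            # asymmetric levels: [0, levels]
    d = x.shape[-1]
    assert d % group_size == 0, "inner dim must be divisible by group size"
    g = x.reshape(*x.shape[:-1], d // group_size, group_size)

    lo = g.min(-1, keepdims=True)
    hi = g.max(-1, keepdims=True)
    mean = g.mean(-1, keepdims=True)
    absmax = np.abs(g).max(-1, keepdims=True)

    # hybrid choice: symmetric if the group mean is small vs. its range
    use_sym = np.abs(mean) < 0.1 * (absmax + 1e-8)

    # symmetric: q = round(x / s), x' = q * s
    s_sym = absmax / qmax
    deq_sym = np.round(g / np.maximum(s_sym, 1e-8)) * s_sym

    # asymmetric: q = round((x - lo) / s), x' = q * s + lo
    s_asym = (hi - lo) / levels
    deq_asym = np.round((g - lo) / np.maximum(s_asym, 1e-8)) * s_asym + lo

    return np.where(use_sym, deq_sym, deq_asym).reshape(x.shape)
```

Grouping over the inner dimension means each group's scale factor is applied along the reduction axis of the attention vector-matrix product, which is what lets dequantization be fused with the dot product and scales be reused across compute units.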
原文摘要
Reducing the hardware footprint of large language models (LLMs) during decoding is critical for efficient long-sequence generation. A key bottleneck is the key-value (KV) cache, whose size scales with sequence length and easily dominates the memory footprint of the model. Previous work proposed quantization methods that are focused on compressing the KV cache while maintaining its information. We introduce InnerQ, a hardware-aware KV-cache quantization scheme that lowers decode latency without sacrificing accuracy. InnerQ applies group-wise quantization while grouping the cache matrices over their inner dimension. Unlike previous work that groups over the outer dimension, InnerQ aligns dequantization with the vector-matrix multiplication and enables scale factor reuse across GPU compute units. This reduces memory accesses and accelerates dequantization, yielding up to $22\%$ speedup over previous work and up to $88\%$ over half-precision vector-matrix multiplication. To preserve fidelity under aggressive compression, InnerQ incorporates (i) hybrid quantization, selecting symmetric or asymmetric quantization per group based on local statistics; (ii) high-precision windows for both the most recent tokens and the attention sink tokens to mitigate outlier leakage; and (iii) per-channel normalization of the key cache, computed once during prefill and folded into the query to avoid runtime overhead. Our evaluation experiments on Llama models show that InnerQ maintains few-shot GSM8K performance comparable to non-quantized KV caches and surpasses prior KV cache quantization methods.
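Point (iii) of the abstract exploits the fact that scaling key channels can be compensated exactly in the query. A minimal sketch, assuming an absmax per-channel scale computed once over the prefill keys (the paper's actual normalization statistic may differ):

```python
import numpy as np

def fold_key_norm_into_query(q, K, eps=1e-8):
    """Per-channel normalization of the key cache, folded into the query.

    Dividing each key channel by a per-channel scale (computed once at
    prefill) flattens outlier channels so the keys quantize better, while
    multiplying the query by the same scale leaves the attention logits
    q @ K.T mathematically unchanged, adding no runtime overhead.
    """
    scale = np.abs(K).max(axis=0) + eps  # one scale per head channel
    K_norm = K / scale                   # normalized keys go to the cache
    q_folded = q * scale                 # compensation applied to the query
    return q_folded, K_norm
```

Because the scales are absorbed into the query activation rather than the cached keys, the normalization costs one elementwise multiply per decode step instead of a rescale over the whole cache.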