LLM Reasoning relevance: 7/10

GlowQ: Group-Shared LOw-Rank Approximation for Quantized LLMs

Selim An, Il hong Suh, Yeseong Kim
arXiv: 2603.25385v1 Published: 2026-03-26 Updated: 2026-03-26

AI Summary

GlowQ optimizes quantized LLMs via group-shared low-rank approximation, improving both inference speed and accuracy.

Key Contributions

  • Proposes GlowQ, a group-shared low-rank approximation method
  • Proposes a selective variant, GlowQ-S, that further reduces latency
  • Experiments show GlowQ outperforms existing methods in both speedup and accuracy

Methodology

Uses group-shared low-rank matrices to correct quantization error, with a selection mechanism that applies the correction only where it yields the largest benefit.
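The core idea can be sketched in a few lines of numpy. The sketch below is illustrative only and assumes a toy symmetric per-tensor quantizer (not the paper's actual scheme): for a group of layers that share the same input, it stacks their quantization errors, takes a truncated SVD, and keeps one shared right factor `V` for the whole group plus a cheap per-layer left factor `U_i`. All function names here are hypothetical.

```python
import numpy as np

def fake_quant(w, bits=4):
    """Toy symmetric per-tensor quantizer (illustrative, not the paper's scheme)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

def group_shared_correction(weights, rank=8):
    """Fit one shared right factor V for all layers in an input-sharing group.

    Returns (quantized weights, per-layer left factors U_i, shared V),
    so that W_i is approximated by Q(W_i) + U_i @ V.
    """
    quants = [fake_quant(w) for w in weights]
    errors = [w - q for w, q in zip(weights, quants)]
    stacked = np.vstack(errors)                  # (sum of out dims, in dim)
    _, _, vt = np.linalg.svd(stacked, full_matrices=False)
    v_shared = vt[:rank]                         # shared right factor, rows orthonormal
    # With V fixed and orthonormal rows, E @ V^T is the least-squares left factor.
    lefts = [e @ v_shared.T for e in errors]
    return quants, lefts, v_shared
```

At inference time the corrected output of layer *i* on input `x` would be `quants[i] @ x + lefts[i] @ (v_shared @ x)`, so the projection `v_shared @ x` is computed once and reused by every module in the group.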

Original Abstract

Quantization techniques such as BitsAndBytes, AWQ, and GPTQ are widely used as standard methods for deploying large language models, but they often degrade accuracy at low-bit representations, e.g., 4 bits. Low-rank correction methods (e.g., LQER, QERA, ASER) have been proposed to mitigate this issue; however, they restore all layers and insert error-correction modules into every decoder block, which increases latency and memory overhead. To address this limitation, we propose GlowQ, a group-shared low-rank approximation for quantized LLMs that caches a single shared right factor per input-sharing group and restores only the groups or layers that yield the highest accuracy benefit. GlowQ computes the high-precision projection once per input-sharing group and reuses it across the group's modules, reducing parameter and memory overhead while retaining the expressivity of layer-specific corrections. We also propose a selective variant, GlowQ-S, that applies the cached shared module only where it provides the largest benefit. Compared with strong baselines, our approach reduces TTFB by 5.6% and increases throughput by 9.6% on average, while reducing perplexity on WikiText-2 by 0.17% and increasing downstream accuracy by 0.42 percentage points. The selective variant GlowQ-S further reduces latency, cutting TTFB by 23.4% and increasing throughput by 37.4%, while maintaining accuracy within 0.2 percentage points on average.
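The abstract's selective variant, GlowQ-S, applies the cached correction only where it helps most. One plausible selection criterion (an assumption, not necessarily the paper's) is the fraction of each layer's quantization-error energy that the shared rank-r correction removes; the hypothetical helper below ranks layers by that gain and keeps the top few.

```python
import numpy as np

def select_layers(errors, v_shared, keep=2):
    """Keep the `keep` layers whose quantization error is best captured
    by the group's shared right factor (hypothetical selection criterion).

    `errors`   : per-layer quantization-error matrices E_i = W_i - Q(W_i)
    `v_shared` : shared right factor with orthonormal rows, shape (rank, in)
    """
    gains = []
    for i, e in enumerate(errors):
        u = e @ v_shared.T                       # optimal left factor for fixed V
        captured = np.linalg.norm(u @ v_shared)  # error energy the correction removes
        gains.append((captured / np.linalg.norm(e), i))
    # Highest-gain layers first; return their indices in ascending order.
    return sorted(i for _, i in sorted(gains, reverse=True)[:keep])
```

Because `v_shared` has orthonormal rows, `||E - UV||² = ||E||² - ||UV||²`, so the ratio above directly measures how much of a layer's error the shared module can cancel; layers below the cut-off skip the correction entirely, which is where GlowQ-S's TTFB and throughput gains come from.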

Tags

Quantization, Low-Rank Approximation, Model Compression, Large Language Models

arXiv Categories

cs.LG cs.AI