LLM Memory & RAG relevance: 9/10

PRAC: Principal-Random Subspace for LLM Activation Compression and Memory-Efficient Training

Yanyi Li, Yimu Zhang, Cong Fang
arXiv: 2602.23111v1 Published: 2026-02-26 Updated: 2026-02-26

AI Summary

PRAC decomposes activations into a principal subspace and a random subspace, enabling activation compression and memory-efficient LLM training.

Key Contributions

  • Proposes PRAC, an activation compression method
  • Proves that PRAC yields an unbiased gradient estimator with minimum variance
  • Experimentally validates PRAC's effectiveness in LLM training

Methodology

Activations are decomposed into a principal subspace and a random subspace: SVD captures the dominant information, and a precise scaling factor is introduced so the compressed activation remains an unbiased estimate.

Original Abstract

Activations have become the primary memory bottleneck in large-batch LLM training. However, existing compression methods fail to exploit the spectral structure of activations, resulting in slow convergence or limited compression. To address this, we connect the algorithm's fast convergence to the requirements on subspace projection, and show that an effective compression should yield an unbiased estimate of the original activation with low variance. We propose Principal-Random Subspace for LLM Activation Compression (PRAC), which decomposes activations into two components: a principal subspace captured via SVD to retain dominant information, and a random subspace sampled from the orthogonal complement to approximate the tail. By introducing a precise scaling factor, we prove that PRAC yields an unbiased gradient estimator with minimum variance under certain conditions. Extensive experiments on pre-training and fine-tuning tasks demonstrate that PRAC achieves up to 36% total memory reduction with negligible performance degradation and minimal computational cost.
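The decomposition described in the abstract can be sketched numerically. Below is a minimal NumPy illustration, not the paper's implementation: `prac_sketch`, the matrix shapes, and the synthetic spectrum are all assumptions; the `(d - k) / r` scaling is the factor that makes a uniformly random complement subspace an unbiased tail estimator.

```python
import numpy as np

def prac_sketch(A, k, r, rng):
    """One compression/reconstruction pass in the spirit of PRAC (sketch).

    The top-k right singular vectors (principal subspace) are kept exactly;
    the tail is approximated by r random orthonormal directions drawn from
    the orthogonal complement, rescaled by (d - k) / r so that the
    reconstruction is an unbiased estimate of A.
    """
    d = A.shape[1]
    _, _, Vt = np.linalg.svd(A, full_matrices=True)
    Vk = Vt[:k].T                      # (d, k) principal basis
    Vperp = Vt[k:].T                   # (d, d-k) orthogonal complement basis
    W, _ = np.linalg.qr(rng.standard_normal((d - k, r)))
    Q = Vperp @ W                      # r random directions in the complement
    scale = (d - k) / r                # since E[Q Q^T] = r/(d-k) * P_perp
    return A @ Vk @ Vk.T + scale * (A @ Q @ Q.T)

# Synthetic "activations" with a decaying spectrum (illustrative only).
rng = np.random.default_rng(0)
n, d, k, r = 64, 32, 4, 4
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = U @ np.diag(np.geomspace(10.0, 0.1, d)) @ V.T

# A single pass is noisy, but averaging many independent passes converges
# to A, illustrating the unbiasedness claim.
one = prac_sketch(A, k, r, np.random.default_rng(12345))
avg = np.mean([prac_sketch(A, k, r, np.random.default_rng(i))
               for i in range(1, 2001)], axis=0)
rel_one = np.linalg.norm(one - A) / np.linalg.norm(A)
rel_avg = np.linalg.norm(avg - A) / np.linalg.norm(A)
```

A single reconstruction has a large relative error (the tail is replaced by only r random directions), while the 2000-pass average is close to A, which is the behavior an unbiased, variance-controlled estimator should show.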

Tags

LLM, Activation Compression, Memory Optimization, SVD, Random Subspace

arXiv Category

cs.LG