GradMAP: Faster Layer Pruning with Gradient Metric and Projection Compensation
AI Summary
GradMAP accelerates LLM layer pruning through a gradient-based metric and projection compensation, improving both pruning speed and post-pruning performance.
Key Contributions
- Proposes a layer-importance metric based on gradient magnitudes, improving pruning efficiency
- Introduces a projection compensation matrix that mitigates the performance degradation caused by pruning
- Experiments demonstrate that GradMAP outperforms existing methods in both pruning speed and performance
Methodology
Layer importance is assessed globally by computing gradients with a single backward pass per pruning decision; a projection compensation matrix then corrects, in one step, the drift introduced by pruning, enabling fast pruning with little performance loss.
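The gradient-magnitude metric can be illustrated with a minimal sketch. The toy residual stack, manual backward pass, and scoring below are hypothetical stand-ins (the paper operates on real LLM layers and does not specify this exact setup); the key point is that one backward pass yields a per-layer gradient magnitude used to rank layers for pruning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a stack of transformer layers: each "layer" is a
# square linear map applied residually, h_{i+1} = h_i + h_i @ W_i.
d, n_layers, batch = 8, 4, 16
Ws = [rng.normal(scale=0.1, size=(d, d)) for _ in range(n_layers)]
x = rng.normal(size=(batch, d))
target = rng.normal(size=(batch, d))

# Forward pass, caching each layer's input for backprop.
hs = [x]
for W in Ws:
    hs.append(hs[-1] + hs[-1] @ W)
loss = 0.5 * np.mean((hs[-1] - target) ** 2)

# Single backward pass: propagate dL/dh and collect dL/dW_i per layer.
g = (hs[-1] - target) / (batch * d)           # dL/dh at the output
importances = [0.0] * n_layers
for i in reversed(range(n_layers)):
    dW = hs[i].T @ g                          # gradient w.r.t. W_i
    importances[i] = float(np.abs(dW).sum())  # gradient-magnitude score
    g = g + g @ Ws[i].T                       # back through the residual layer

# Layers with the smallest scores are the pruning candidates.
order = sorted(range(n_layers), key=lambda i: importances[i])
print("layers ranked least -> most important:", order)
```

Because all per-layer gradients fall out of the same backward pass, the metric's cost is one forward/backward sweep per pruning decision rather than one evaluation per candidate layer.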
Original Abstract
Large Language Models (LLMs) exhibit strong reasoning abilities, but their high computational costs limit their practical deployment. Recent studies reveal significant redundancy in LLM layers, making layer pruning an active research topic. Layer pruning research primarily focuses on two aspects: measuring layer importance and recovering performance after pruning. Unfortunately, existing works fail to simultaneously maintain pruning performance and efficiency. In this study, we propose GradMAP, a faster layer pruning method with \textbf{Grad}ient \textbf{M}etric \textbf{A}nd \textbf{P}rojection compensation, which consists of two stages. In the first stage, we introduce a novel metric based on gradient magnitudes, enabling a global assessment of layer importance. Note that it requires only a single backward propagation step per pruning decision, substantially enhancing pruning efficiency. In the second stage, we first analyze the layers with the largest mean shift resulting from pruning, and then incorporate a simple yet effective projection compensation matrix to correct this drift in one step. In this way, the degradation of model performance caused by layer pruning is effectively alleviated. Extensive experiments show that GradMAP outperforms previous layer pruning methods in both pruning speed (achieving an average $4\times$ speedup) and performance.
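The one-step projection compensation can be sketched as a closed-form least-squares fit. The synthetic activations and the specific construction below (solving $\min_P \|H_{\text{pruned}}P - H_{\text{full}}\|_F$) are assumptions for illustration; the paper's exact compensation matrix may be built differently, but the idea of correcting post-pruning drift with one linear solve is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibration activations: H_full are hidden states at some point in the
# unpruned model, H_pruned the states at the same point after layer removal.
# Both are synthetic here; in practice they come from a small calibration set.
n, d = 256, 16
H_full = rng.normal(size=(n, d))
shift = rng.normal(scale=0.5, size=(d,))                 # mean shift from pruning
H_pruned = H_full + shift + rng.normal(scale=0.05, size=(n, d))

# One-step compensation: fit P minimizing ||H_pruned @ P - H_full||_F
# in closed form via least squares (hypothetical instantiation).
P, *_ = np.linalg.lstsq(H_pruned, H_full, rcond=None)
H_corrected = H_pruned @ P

err_before = np.linalg.norm(H_pruned - H_full)
err_after = np.linalg.norm(H_corrected - H_full)
print(f"drift before: {err_before:.2f}, after: {err_after:.2f}")
```

Since $P = I$ recovers the uncorrected error, the least-squares optimum can never be worse, and the correction is applied in a single solve with no gradient-based fine-tuning.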