LLM Reasoning relevance: 9/10

High-Fidelity Pruning for Large Language Models

Yijun Zhu, Jianxin Wang, Chengchao Shen
arXiv: 2603.08083v1 · Published: 2026-03-09 · Updated: 2026-03-09

AI Summary

Proposes an information-entropy-based Taylor pruning method that improves the post-pruning performance of large language models without requiring an additional teacher model.

Key Contributions

  • Proposes an information-entropy-based Taylor pruning criterion that requires no additional teacher model.
  • The criterion evaluates neuron importance more holistically, preserving the pruned model's predictive ability.
  • Experiments show the method outperforms existing pruning approaches across the LLaMA and Qwen model series.

Methodology

Uses the information entropy of the model's output distribution as the criterion for Taylor pruning, scoring neuron importance so that pruning has minimal impact on the model's predictive ability.
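The idea above can be sketched for a single linear layer. This is a minimal illustration, not the paper's implementation: it assumes a layer z = W·x whose output feeds a softmax, takes the entropy H of that softmax as the objective, and scores each weight by the first-order Taylor term |∂H/∂W · W|, summed per output neuron. The gradient of entropy with respect to a logit has the closed form ∂H/∂z_i = -p_i(log p_i + H). All function and variable names here are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy_taylor_importance(W, x):
    """Per-neuron first-order Taylor importance under an entropy criterion.

    Sketch only: scores row i of a linear layer z = W @ x by
    sum_j |dH/dW_ij * W_ij|, where H is the entropy of softmax(z).
    """
    z = W @ x
    p = softmax(z)
    H = -(p * np.log(p)).sum()
    # Closed-form gradient of entropy w.r.t. logits: dH/dz_i = -p_i (log p_i + H)
    dH_dz = -p * (np.log(p) + H)
    dH_dW = np.outer(dH_dz, x)            # chain rule through z = W @ x
    return np.abs(dH_dW * W).sum(axis=1)  # one score per output neuron
```

Neurons with the lowest scores would be pruned first, since removing them perturbs the output distribution's entropy the least under the first-order approximation.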

Original Abstract

Large Language Models (LLMs) have demonstrated exceptional performance across a wide range of tasks, yet their significant computational and memory requirements present major challenges for deployment. A common approach uses Taylor expansion on the loss function to estimate neuron importance. However, its reliance on one-hot cross entropy loss is a key limitation: importance is assessed narrowly, based only on the probability assigned to the single predicted next token, thereby ignoring the other potential predictions of the original model. An intuitive solution is to employ a self-distillation criterion for importance evaluation. However, this approach introduces significant computational overhead by requiring a separate teacher model for supervision. To this end, we propose a simple but effective criterion, the information entropy of the model's output distribution, to efficiently evaluate neuron importance scores with Taylor pruning without requiring an additional teacher. Compared to the plain cross entropy criterion, it provides a more holistic criterion for Taylor pruning, pruning neurons with the least impact on the model's predictions in a global manner and thereby preserving the fidelity of the model's predictive capabilities. Experimental results on extensive zero-shot benchmarks demonstrate that our method consistently outperforms existing pruning methods across the LLaMA and Qwen series models. The source code and trained weights are available at https://github.com/visresearch/HFPrune.

Tags

Large Language Model · Pruning · Information Entropy · Taylor Expansion · Model Compression

arXiv Category

cs.CL