LLM Reasoning relevance: 8/10

Sink-Aware Pruning for Diffusion Language Models

Aidar Myrzakhan, Tianyi Li, Bowei Guo, Shengkun Tang, Zhiqiang Shen
arXiv: 2602.17664v1 Published: 2026-02-19 Updated: 2026-02-19

AI Summary

For diffusion language models, this work proposes Sink-Aware Pruning, a method that identifies and prunes unstable attention sinks, improving inference efficiency.

Key Contributions

  • Discovers that attention sinks in diffusion language models are unstable, in contrast to autoregressive models.
  • Proposes Sink-Aware Pruning, which automatically identifies and prunes unstable attention sinks.
  • Experiments show the method outperforms existing pruning baselines under matched compute.

Methodology

By analyzing the variance of attention-sink positions over the generation trajectory of a diffusion language model, the method identifies unstable sinks and removes them via pruning, enabling efficient inference.
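The summary above only states the idea at a high level; the exact criterion is not given in this card. A minimal sketch of one plausible reading, assuming attention weights of shape (timesteps, heads, queries, keys) and using the fraction of denoising steps where the dominant sink position moves as a hypothetical instability proxy (the function names and the 0.5 threshold are illustrative, not from the paper):

```python
import numpy as np

def dominant_sink_positions(attn):
    # attn: (timesteps, heads, query_len, key_len) attention weights.
    # The dominant sink at each timestep is the key position receiving
    # the most attention mass, averaged over heads and queries.
    mass = attn.mean(axis=(1, 2))   # -> (timesteps, key_len)
    return mass.argmax(axis=-1)     # -> (timesteps,)

def sink_instability(attn):
    # Fraction of steps at which the dominant sink position shifts --
    # a simple proxy for the positional variance described above.
    pos = dominant_sink_positions(attn)
    return float((np.diff(pos) != 0).mean())

def prune_unstable_sinks(keep_mask, attn, threshold=0.5):
    # If sinks are unstable over the trajectory, stop protecting the
    # transient sink positions so they become eligible for pruning.
    mask = keep_mask.copy()
    if sink_instability(attn) > threshold:
        for p in np.unique(dominant_sink_positions(attn)):
            mask[p] = False
    return mask
```

A stable trajectory (one fixed sink, as in AR models) yields instability 0.0 and leaves the keep-mask untouched; a trajectory whose sink hops between positions crosses the threshold and unprotects those positions.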

Original Abstract

Diffusion Language Models (DLMs) incur high inference cost due to iterative denoising, motivating efficient pruning. Existing pruning heuristics, largely inherited from autoregressive (AR) LLMs, typically preserve attention sink tokens because AR sinks serve as stable global anchors. We show that this assumption does not hold for DLMs: the attention-sink position exhibits substantially higher variance over the full generation trajectory (measured by how the dominant sink locations shift across timesteps), indicating that sinks are often transient and less structurally essential than in AR models. Based on this observation, we propose Sink-Aware Pruning, which automatically identifies and prunes unstable sinks in DLMs (prior studies usually keep sinks for AR LLMs). Without retraining, our method achieves a better quality-efficiency trade-off and outperforms strong prior pruning baselines under matched compute. Our code is available at https://github.com/VILA-Lab/Sink-Aware-Pruning.

Tags

Diffusion Language Model · Pruning · Attention Sink · Model Compression

arXiv Categories

cs.CL cs.AI cs.LG