LLM Reasoning relevance: 8/10

SDFP: Speculative Decoding with FIT-Pruned Models for Training-Free and Plug-and-Play LLM Acceleration

Hanyu Wei, Zunhai Su, Peng Lu, Chao Li, Spandan Tiwari, Ashish Sirasao, Yuhan Dong
arXiv: 2602.05499v1 Published: 2026-02-05 Updated: 2026-02-05

AI Summary

SDFP proposes a training-free, plug-and-play LLM acceleration framework that builds the draft model via FIT-based layer pruning.

Key Contributions

  • Proposes a layer-pruning method based on the Fisher Information Trace (FIT)
  • Builds a lightweight draft model without any additional training
  • Achieves a 1.32x-1.5x decoding speedup

Methodology

FIT is used to estimate each layer's sensitivity; low-impact layers are pruned to form the draft model, which is then verified against the original model via standard speculative decoding, with no additional training required.
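The two stages above can be sketched in toy form. This is an illustrative assumption, not the paper's implementation: the names (`fit_score`, `prune_layers`, `speculative_decode`), the sum-of-squared-gradients Fisher proxy, and the greedy decoding setting are all hypothetical simplifications.

```python
# Hypothetical sketch of SDFP's two stages (names and setup are assumed,
# not taken from the paper): FIT-based layer ranking, then a greedy
# speculative-decoding loop whose output matches the target model alone.

def fit_score(layer_grads):
    """FIT proxy: trace of the empirical Fisher information, i.e. the
    sum of squared parameter gradients for one layer."""
    return sum(g * g for g in layer_grads)

def prune_layers(per_layer_grads, keep_ratio=0.75):
    """Rank layers by FIT sensitivity and drop the lowest-impact ones;
    returns the indices of the layers kept for the draft model."""
    n = len(per_layer_grads)
    scores = sorted((fit_score(g), i) for i, g in enumerate(per_layer_grads))
    dropped = {i for _, i in scores[: n - max(1, round(keep_ratio * n))]}
    return [i for i in range(n) if i not in dropped]

def speculative_decode(draft_next, target_next, prompt, k=4, max_new=8):
    """Greedy speculative loop: the draft proposes k tokens, the target
    verifies them and always emits its own token at a mismatch, so the
    output is identical to decoding with the target model alone."""
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        ctx, proposal = list(out), []
        for _ in range(k):                 # draft proposes k tokens
            ctx.append(draft_next(ctx))
            proposal.append(ctx[-1])
        for tok in proposal:               # target verifies the block
            if len(out) - len(prompt) >= max_new:
                break
            t = target_next(out)
            out.append(t)
            if t != tok:                   # reject the rest of the proposal
                break
    return out
```

Because the target model re-emits its own token at every position, a pruned draft only changes how many positions are accepted per verification pass (i.e. speed), never the final output, which mirrors the paper's claim that the target's output distribution is preserved.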

Original Abstract

Large language models (LLMs) underpin interactive multimedia applications such as captioning, retrieval, recommendation, and creative content generation, yet their autoregressive decoding incurs substantial latency. Speculative decoding reduces latency using a lightweight draft model, but deployment is often limited by the cost and complexity of acquiring, tuning, and maintaining an effective draft model. Recent approaches usually require auxiliary training or specialization, and even training-free methods incur costly search or optimization. We propose SDFP, a fully training-free and plug-and-play framework that builds the draft model via Fisher Information Trace (FIT)-based layer pruning of a given LLM. Using layer sensitivity as a proxy for output perturbation, SDFP removes low-impact layers to obtain a compact draft while preserving compatibility with the original model for standard speculative verification. SDFP needs no additional training, hyperparameter tuning, or separately maintained drafts, enabling rapid, deployment-friendly draft construction. Across benchmarks, SDFP delivers 1.32x-1.5x decoding speedup without altering the target model's output distribution, supporting low-latency multimedia applications.

Tags

Large Language Models  Speculative Decoding  Model Pruning  Model Acceleration

arXiv Categories

cs.AI