DFlash: Block Diffusion for Flash Speculative Decoding
AI Summary
DFlash proposes a diffusion-based speculative decoding framework that significantly accelerates LLM inference.
Key Contributions
- Proposes the DFlash framework, which uses a diffusion model to generate draft tokens in parallel
- Injects context features from the target model into the draft model, improving draft quality
- Experiments show that DFlash delivers significant speedups across a variety of tasks
Methodology
A lightweight block diffusion model generates draft tokens in parallel, guided by context features extracted from the target model, enabling efficient speculative decoding.
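The draft-then-verify loop can be pictured with a minimal, framework-agnostic sketch. The helper names below (`target_forward`, `diffusion_draft`) and the greedy acceptance rule are illustrative assumptions, not the paper's actual API; the point is that drafting is a single parallel denoising pass rather than a token-by-token loop, and that the drafter is conditioned on features from the target model's previous forward pass.

```python
def argmax(logits):
    # Index of the highest-scoring vocabulary entry.
    return max(range(len(logits)), key=logits.__getitem__)

def speculative_step(tokens, features, target_forward, diffusion_draft,
                     block_len=8):
    # 1) Draft a whole block in ONE parallel denoising pass of the
    #    lightweight block diffusion drafter, conditioned on context
    #    features taken from the target model's previous forward pass.
    draft = diffusion_draft(features, block_len)

    # 2) Verify in parallel: a single target forward over prefix + draft
    #    scores every draft position at once (and yields fresh features
    #    for the next iteration).
    verify_logits, new_features = target_forward(tokens + draft)

    # 3) Greedy acceptance: keep the longest draft prefix that matches the
    #    target's own argmax predictions; on the first mismatch, substitute
    #    the target's token so each step still advances by >= 1 token.
    accepted = []
    prefix_len = len(tokens)
    for i, tok in enumerate(draft):
        target_choice = argmax(verify_logits[prefix_len + i - 1])
        if tok != target_choice:
            accepted.append(target_choice)  # target's correction
            break
        accepted.append(tok)
    else:
        # Whole block accepted: take one free "bonus" token from the target.
        accepted.append(argmax(verify_logits[prefix_len + len(draft) - 1]))
    return tokens + accepted, new_features
```

Because the drafter sees the target's own context features, its block predictions track the target distribution more closely, which is what raises the acceptance rate.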
Original Abstract
Autoregressive large language models (LLMs) deliver strong performance but require inherently sequential decoding, leading to high inference latency and poor GPU utilization. Speculative decoding mitigates this bottleneck by using a fast draft model whose outputs are verified in parallel by the target LLM; however, existing methods still rely on autoregressive drafting, which remains sequential and limits practical speedups. Diffusion LLMs offer a promising alternative by enabling parallel generation, but current diffusion models typically underperform compared with autoregressive models. In this paper, we introduce DFlash, a speculative decoding framework that employs a lightweight block diffusion model for parallel drafting. By generating draft tokens in a single forward pass and conditioning the draft model on context features extracted from the target model, DFlash enables efficient drafting with high-quality outputs and higher acceptance rates. Experiments show that DFlash achieves over 6x lossless acceleration across a range of models and tasks, delivering up to 2.5x higher speedup than the state-of-the-art speculative decoding method EAGLE-3.
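For intuition on the reported numbers, the standard speculative-decoding accounting below (an illustration with assumed values, not the paper's measurement methodology) shows how end-to-end speedup follows from the mean number of accepted draft tokens and the drafter's cost relative to the target model:

```python
def expected_speedup(mean_accepted, draft_cost_ratio):
    # Each iteration: one target forward (cost 1.0) plus one cheap draft
    # pass (cost = draft_cost_ratio), advancing by the accepted draft
    # tokens plus the target's one corrected/bonus token.
    tokens_per_iter = mean_accepted + 1
    cost_per_iter = 1.0 + draft_cost_ratio
    return tokens_per_iter / cost_per_iter

# Illustrative values only: accepting ~6 draft tokens per block with a
# drafter costing ~10% of a target forward gives roughly 6.4x.
print(expected_speedup(mean_accepted=6.0, draft_cost_ratio=0.1))  # ~6.36
```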