Balancing Coverage and Draft Latency in Vocabulary Trimming for Faster Speculative Decoding
AI Summary
The paper proposes vocabulary trimming for draft models to balance token coverage against draft latency, thereby accelerating speculative decoding.
Main Contributions
- Proposes vocabulary trimming as a way to accelerate speculative decoding
- Casts draft vocabulary selection as a constrained optimization problem
- Uses a Tree-structured Parzen Estimator to explore the coverage-latency Pareto frontier
Methodology
Draft vocabulary selection is framed as a constrained optimization problem: token coverage is computed from the training data, draft latency is estimated via architecture-aware FLOPs, and a Tree-structured Parzen Estimator optimizes a coverage-latency utility function subject to a minimum coverage constraint.
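The coverage-latency trade-off described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: coverage is the cumulative frequency of the top-V tokens in the training corpus, latency is approximated by the LM-head FLOPs (a hidden_dim × vocab_size matmul per decoded token), and a simple exhaustive search over vocabulary sizes stands in for the Tree-structured Parzen Estimator used in the paper. The weight `lam` and the linear utility form are assumptions for illustration.

```python
from collections import Counter

def coverage_curve(token_ids):
    """Cumulative coverage of the top-V most frequent tokens:
    curve[V-1] is the fraction of corpus tokens covered by a V-token vocab."""
    counts = Counter(token_ids)
    total = sum(counts.values())
    cum, curve = 0, []
    for c in sorted(counts.values(), reverse=True):
        cum += c
        curve.append(cum / total)
    return curve

def lm_head_flops(vocab_size, hidden_dim):
    # The LM head is a [hidden_dim x vocab_size] matmul per decoded token,
    # so its cost grows linearly with the vocabulary size.
    return 2 * hidden_dim * vocab_size

def select_vocab(token_ids, hidden_dim, full_vocab, min_coverage, lam):
    """Pick the vocab size maximizing a utility of coverage minus a
    latency penalty, subject to coverage >= min_coverage.
    (The paper optimizes a utility like this with TPE; here we just
    enumerate candidate sizes.)"""
    curve = coverage_curve(token_ids)
    full_flops = lm_head_flops(full_vocab, hidden_dim)
    best_v, best_u = None, float("-inf")
    for v, cov in enumerate(curve, start=1):
        if cov < min_coverage:
            continue  # coverage constraint violated
        util = cov - lam * lm_head_flops(v, hidden_dim) / full_flops
        if util > best_u:
            best_v, best_u = v, util
    return best_v
```

With a skewed token distribution and a sufficiently large latency weight, the search keeps only the few tokens needed to satisfy the coverage constraint, mirroring the paper's observation that domain-specific workloads exercise only a small fraction of the full vocabulary.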
Original Abstract
Speculative decoding accelerates inference for Large Language Models by using a lightweight draft model to propose candidate tokens that are verified in parallel by a larger target model. Prior work shows that the draft model often dominates speculative decoding latency, since it generates tokens sequentially and incurs high cost from its language modeling head as vocabulary size grows. This exposes a fundamental trade-off in draft model design: larger vocabularies improve token coverage and agreement with the target model, but incur higher draft latency, while smaller vocabularies reduce latency at the risk of missing tokens required for accurate draft generation. We address this trade-off through vocabulary trimming for draft models, motivated by the observation that domain-specific workloads use only a small fraction of the full vocabulary. We cast draft vocabulary selection as a constrained optimization problem that balances token coverage and draft latency. Coverage is computed over assistant responses in the training data, while latency is estimated using architecture-aware FLOPs that capture the cost of the language modeling head as a function of vocabulary size. We optimize a utility function with a Tree-structured Parzen Estimator to efficiently explore the coverage-latency Pareto frontier under a minimum coverage constraint. Experiments show improved speculative decoding throughput while reducing draft vocabularies by up to 97% with high coverage. On domain-specific tasks, we achieve up to 16% latency reduction and 20% throughput improvement, and up to 6.7% throughput gains on diverse out-of-distribution tasks.