LLM Reasoning relevance: 9/10

Beyond Tokens: Semantic-Aware Speculative Decoding for Efficient Inference by Probing Internal States

Ximing Dong, Shaowei Wang, Dayi Lin, Boyuan Chen, Ahmed E. Hassan
arXiv: 2602.03708v1 Published: 2026-02-03 Updated: 2026-02-03

AI Summary

SemanticSpec improves LLM inference efficiency through semantic-aware speculative decoding, with particularly strong gains on long chain-of-thought reasoning.

Key Contributions

  • Proposes SemanticSpec, a semantic-aware speculative decoding framework
  • Introduces a semantic probability estimation mechanism that probes internal hidden states to assess the likelihood of semantic sequences
  • Demonstrates experimentally that it outperforms conventional methods on multiple benchmarks

Methodology

By probing the model's internal hidden states, SemanticSpec estimates the likelihood of generating a sequence with a specific meaning, enabling verification at the semantic level rather than the token level and thus more efficient speculative decoding.
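The summary does not specify how the probe or the acceptance rule is parameterized, but the idea can be sketched as follows: a lightweight linear probe maps a hidden state to a distribution over semantic classes, and a drafted semantic sequence is accepted with the standard speculative-decoding ratio test applied at the semantic level. All function and variable names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a logit vector.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def semantic_probability(hidden_state, probe_weights, probe_bias):
    """Estimate P(semantic class | hidden state) with a linear probe.

    Hypothetical parameterization: the paper only says internal
    hidden states are probed, not how the probe is built.
    """
    return softmax(probe_weights @ hidden_state + probe_bias)

def accept_semantic_draft(p_target, p_draft, rng):
    """Speculative acceptance rule lifted to semantic sequences:
    accept the drafted meaning with probability min(1, p_target / p_draft),
    mirroring token-level speculative decoding.
    """
    ratio = p_target / max(p_draft, 1e-9)
    return rng.random() < min(1.0, ratio)

# Toy usage with random probe weights (hidden size 8, 3 semantic classes).
rng = np.random.default_rng(0)
h = rng.standard_normal(8)            # hidden state from the target model
W = rng.standard_normal((3, 8))       # probe weights (assumed learned)
b = np.zeros(3)
probs = semantic_probability(h, W, b)  # distribution over semantic classes
```

Under this rule, a draft whose meaning the target model rates at least as likely as the drafter did is always kept, so semantically equivalent but token-wise different drafts are no longer rejected.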

Original Abstract

Large Language Models (LLMs) achieve strong performance across many tasks but suffer from high inference latency due to autoregressive decoding. The issue is exacerbated in Large Reasoning Models (LRMs), which generate lengthy chains of thought. While speculative decoding accelerates inference by drafting and verifying multiple tokens in parallel, existing methods operate at the token level and ignore semantic equivalence (i.e., different token sequences expressing the same meaning), leading to inefficient rejections. We propose SemanticSpec, a semantic-aware speculative decoding framework that verifies entire semantic sequences instead of tokens. SemanticSpec introduces a semantic probability estimation mechanism that probes the model's internal hidden states to assess the likelihood of generating sequences with specific meanings. Experiments on four benchmarks show that SemanticSpec achieves up to 2.7x speedup on DeepSeekR1-32B and 2.1x on QwQ-32B, consistently outperforming token-level and sequence-level baselines in both efficiency and effectiveness.

Tags

LLM · speculative decoding · semantic understanding · inference efficiency · internal states

arXiv Categories

cs.CL cs.PF