LLM Reasoning relevance: 7/10

Rejection Mixing: Fast Semantic Propagation of Mask Tokens for Efficient DLLM Inference

Yushi Ye, Feng Hong, Huangjie Zheng, Xu Chen, Zhiyong Chen, Yanfeng Wang, Jiangchao Yao
arXiv: 2602.22868v1 Published: 2026-02-26 Updated: 2026-02-26

AI Summary

ReMix mitigates the semantic contradictions that arise during parallel decoding in DLLMs by refining token representations in continuous space, yielding substantial inference speedups.

Key Contributions

  • Proposes the ReMix framework, which integrates continuous representations into the discrete decoding process
  • Introduces a Continuous Mixing State that iteratively refines token representations
  • Proposes a rejection rule that prevents error propagation

Methodology

ReMix introduces a continuous mixing state between the initial masked state and the final decoded-token state, and applies a rejection rule to keep decoding stable.

Original Abstract

Diffusion Large Language Models (DLLMs) promise fast non-autoregressive inference but suffer a severe quality-speed trade-off in parallel decoding. This stems from the "combinatorial contradiction" phenomenon, where parallel tokens form semantically inconsistent combinations. We address this by integrating continuous representations into the discrete decoding process, as they preserve rich inter-position dependency. We propose ReMix (Rejection Mixing), a framework that introduces a novel Continuous Mixing State as an intermediate between the initial masked state and the final decoded token state. This intermediate state allows a token's representation to be iteratively refined in a continuous space, resolving mutual conflicts with other tokens before collapsing into a final discrete sample. Furthermore, a rejection rule reverts uncertain representations from the continuous state back to the masked state for reprocessing, ensuring stability and preventing error propagation. ReMix thus mitigates combinatorial contradictions by enabling continuous-space refinement during discrete diffusion decoding. Extensive experiments demonstrate that ReMix, as a training-free method, achieves a $2-8\times$ inference speedup without any quality degradation.
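The abstract describes a three-state decoding loop: positions start masked, enter a continuous mixing state where their representations are iteratively refined, and either collapse to a discrete token when confident or are rejected back to the masked state when uncertain. The paper's actual update and rejection rules are not reproduced here; the sketch below is a toy NumPy interpretation of that state machine, with illustrative confidence thresholds (`tau_accept`, `tau_reject`), a simple linear mixing coefficient (`alpha`), and a stand-in `logits_fn` in place of the DLLM, all of which are assumptions for illustration only.

```python
import numpy as np

# Per-position decoding states in the sketch
MASK, MIXING, DECODED = 0, 1, 2

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def remix_decode(logits_fn, seq_len, emb,
                 tau_accept=0.9, tau_reject=0.3, alpha=0.5, max_steps=20):
    """Toy sketch of a Rejection-Mixing-style loop (NOT the paper's code).

    logits_fn: maps (seq_len, emb_dim) token representations to
               (seq_len, vocab_size) logits -- stands in for the DLLM.
    emb:       (vocab_size, emb_dim) token embedding table.
    """
    emb_dim = emb.shape[1]
    state = np.full(seq_len, MASK)
    reps = np.zeros((seq_len, emb_dim))      # continuous representations
    tokens = np.full(seq_len, -1)

    for _ in range(max_steps):
        probs = softmax(logits_fn(reps))
        conf = probs.max(axis=1)             # per-position confidence
        argmax = probs.argmax(axis=1)

        for i in range(seq_len):
            if state[i] == DECODED:
                continue
            # Expected embedding under the predicted token distribution
            target = probs[i] @ emb
            if state[i] == MASK:
                reps[i] = target             # enter the continuous mixing state
                state[i] = MIXING
            elif conf[i] >= tau_accept:
                tokens[i] = argmax[i]        # collapse to a discrete sample
                reps[i] = emb[argmax[i]]
                state[i] = DECODED
            elif conf[i] < tau_reject:
                reps[i] = np.zeros(emb_dim)  # rejection rule: back to masked
                state[i] = MASK
            else:
                # keep refining in continuous space
                reps[i] = (1 - alpha) * reps[i] + alpha * target
        if (state == DECODED).all():
            break
    return tokens, state
```

With a toy `logits_fn` that strongly prefers token `i` at position `i`, every position passes through MASK, then MIXING, then DECODED within a couple of steps; in the real method the refinement lets neighboring positions' representations influence each other before any of them commits, which is what resolves the combinatorial contradictions.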

Tags

DLLM · Diffusion Model · Inference Speedup · Semantic Consistency

arXiv Category

cs.CL