LLM Reasoning relevance: 7/10

Adaptation to Intrinsic Dependence in Diffusion Language Models

Yunxiao Zhao, Changxiao Cai
arXiv: 2602.20126v1 · Published: 2026-02-23 · Updated: 2026-02-23

AI Summary

The paper proposes a distribution-agnostic unmasking schedule for diffusion language models (DLMs) that adapts to the data's dependence structure, thereby accelerating sampling.

Key Contributions

  • Proposes a DLM unmasking schedule that adapts to the data's dependence structure
  • Proves theoretical guarantees on sampling convergence for this schedule that improve upon existing methods
  • Reveals the adaptivity of DLMs to intrinsic data structure, and highlights the benefit of randomized unmasking in inference-schedule design

Methodology

By randomizing the number of tokens revealed at each iteration, the unmasking schedule adapts to the dependence structure of the target data distribution.
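The idea above can be sketched as a schedule generator. This is a minimal illustration, not the paper's method: the specific randomization distribution here (uniform block sizes over a feasible range) is an assumption chosen for clarity; the paper analyzes two particular parameter choices that this sketch does not reproduce.

```python
import random

def randomized_unmask_schedule(L, K, rng=None):
    """Partition L masked token positions into at most K unmasking steps.

    Unlike a deterministic schedule that reveals a fixed number of tokens
    per step, the number revealed at each iteration is randomized.
    NOTE: the uniform size distribution below is an illustrative
    assumption, not the paper's specific parameter choices.
    """
    rng = rng or random.Random()
    positions = list(range(L))
    rng.shuffle(positions)  # random reveal order over positions
    schedule = []
    remaining = L
    for step in range(K):
        steps_left = K - step
        if steps_left == 1:
            size = remaining  # reveal everything on the final step
        else:
            # Leave at least one token for each later step so the
            # schedule always finishes within K iterations.
            max_size = remaining - (steps_left - 1)
            size = rng.randint(1, max(1, max_size))
        schedule.append(positions[:size])
        positions = positions[size:]
        remaining -= size
        if remaining == 0:
            break
    return schedule
```

In practice each step of such a schedule would be followed by a model call that fills in the revealed positions conditioned on all tokens unmasked so far; the sketch only generates the order and sizes.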

Original Abstract

Diffusion language models (DLMs) have recently emerged as a promising alternative to autoregressive (AR) approaches, enabling parallel token generation beyond a rigid left-to-right order. Despite growing empirical success, the theoretical understanding of how unmasking schedules -- which specify the order and size of unmasked tokens during sampling -- affect generation quality remains limited. In this work, we introduce a distribution-agnostic unmasking schedule for DLMs that adapts to the (unknown) dependence structure of the target data distribution, without requiring any prior knowledge or hyperparameter tuning. In contrast to prior deterministic procedures that fix unmasking sizes, our method randomizes the number of tokens revealed at each iteration. We show that, for two specific parameter choices, the sampling convergence guarantees -- measured by Kullback-Leibler (KL) divergence -- scale as $\widetilde O(\mathsf{TC}/K)$ and $\widetilde O(\mathsf{DTC}/K)$ respectively. Here, $K$ is the number of iterations, and $\mathsf{TC}$ and $\mathsf{DTC}$ are the total correlation and dual total correlation of the target distribution, capturing the intrinsic dependence structure underlying the data. Importantly, our guarantees hold in the practically relevant parallel-sampling regime $K<L$ where $L$ is the token sequence length. These results significantly improve upon prior convergence theories and yield substantial sampling acceleration for low-complexity distributions. Overall, our findings unveil the adaptivity of DLMs to intrinsic data structures and shed light on the benefit of randomized unmasking sizes in inference schedule design.
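For reference, the total correlation and dual total correlation appearing in the bounds above are standard information-theoretic dependence measures. The definitions below follow the usual conventions; the paper's exact normalization may differ.

```latex
% For a random sequence X = (X_1, \dots, X_L) with joint entropy H(X):
% Total correlation: sum of marginal entropies minus the joint entropy.
\mathsf{TC}(X) = \sum_{i=1}^{L} H(X_i) - H(X_1, \dots, X_L)
% Dual total correlation: joint entropy minus the sum of
% conditional entropies of each token given all the others.
\mathsf{DTC}(X) = H(X_1, \dots, X_L) - \sum_{i=1}^{L} H\bigl(X_i \mid X_{\setminus i}\bigr)
```

Both quantities vanish when the tokens are mutually independent, which is consistent with the claimed sampling acceleration for low-complexity distributions.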

Tags

Diffusion Language Models · Unmasking Schedule · Sampling Convergence · Total Correlation

arXiv Categories

cs.LG cs.IT math.ST stat.ML