Multimodal Learning relevance: 6/10

Why Gaussian Diffusion Models Fail on Discrete Data?

Alexander Shabalin, Simon Elistratov, Viacheslav Meshchaninov, Ildus Sadrtdinov, Dmitry Vetrov
arXiv: 2604.02028v1 Published: 2026-04-02 Updated: 2026-04-02

AI Summary

Investigates why Gaussian diffusion models fail at generating discrete data and proposes methods to mitigate the failure.

Key Contributions

  • Identified a key failure mode of DDPM sampling on discrete data: the density of noisified data becomes multimodal
  • Introduced the q-sampling solver to alleviate this issue
  • Validated the combined effect of self-conditioning and q-sampling

Methodology

Experiments use a toy Random Hierarchy Model to analyze the density of noisified data, then combine self-conditioning with q-sampling to improve generation.
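The multimodality the paper points to can be seen in a one-dimensional toy version of the setup: discrete tokens embedded as delta-distributions become a Gaussian mixture once noisified, and at low noise levels that mixture has deep low-density valleys between modes. A minimal sketch (the two-mode embedding at ±1 and the specific `alpha_bar` values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Toy discrete data: two tokens embedded as deltas at -1 and +1 (assumed).
modes = np.array([-1.0, 1.0])

def noisified_density(x, alpha_bar):
    """Density of x_t = sqrt(alpha_bar) * x_0 + sqrt(1 - alpha_bar) * eps
    when x_0 is a uniform mixture of deltas at `modes`."""
    mean = np.sqrt(alpha_bar) * modes   # shifted mode centers
    std = np.sqrt(1.0 - alpha_bar)      # Gaussian noise scale
    comps = np.exp(-0.5 * ((x[:, None] - mean) / std) ** 2)
    comps /= std * np.sqrt(2.0 * np.pi)
    return comps.mean(axis=1)

xs = np.linspace(-2.0, 2.0, 401)
mid = len(xs) // 2  # x = 0, halfway between the two modes

# Little noise (alpha_bar near 1): sharply bimodal density with a
# deep low-density valley between modes, where DDPM can stray.
p_low_noise = noisified_density(xs, alpha_bar=0.99)

# Heavy noise (alpha_bar near 0): the mixture collapses toward a
# single near-standard Gaussian, and the valley disappears.
p_high_noise = noisified_density(xs, alpha_bar=0.01)

print(p_low_noise[mid] / p_low_noise.max())    # ~0: deep valley
print(p_high_noise[mid] / p_high_noise.max())  # ~1: unimodal
```

The interesting regime is between these extremes: there is an intermediate noise interval in which the valley is present but shallow enough that a stochastic sampler occasionally lands in it, producing out-of-distribution inputs for the denoiser.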

Original Abstract

Diffusion models have become a standard approach for generative modeling in continuous domains, yet their application to discrete data remains challenging. We investigate why Gaussian diffusion models with the DDPM solver struggle to sample from discrete distributions that are represented as a mixture of delta-distributions in the continuous space. Using a toy Random Hierarchy Model, we identify a critical sampling interval in which the density of noisified data becomes multimodal. In this regime, DDPM occasionally enters low-density regions between modes, producing out-of-distribution inputs for the model and degrading sample quality. We show that existing heuristics, including self-conditioning and a solver we term q-sampling, help alleviate this issue. Furthermore, we demonstrate that combining self-conditioning with switching from DDPM to q-sampling within the critical interval improves generation quality on real data. We validate these findings across conditional and unconditional tasks in multiple domains, including text, programming code, and proteins.
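The combined remedy in the abstract can be sketched as a sampling loop. Everything below is an illustrative reading, not the paper's implementation: the `denoiser` is a stand-in for a trained network, the critical interval bounds are made up, and "q-sampling" is interpreted here as drawing x_{t-1} from the forward marginal q(x_{t-1} | x̂_0) (re-noising the current clean-data prediction) instead of the DDPM posterior q(x_{t-1} | x_t, x̂_0):

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(x_t, t, x0_prev):
    # Stand-in for a trained network predicting x_0 from (x_t, t);
    # self-conditioning additionally feeds in the previous x_0 estimate.
    return np.clip(0.5 * (x_t + x0_prev), -1.0, 1.0)

def ddpm_step(x_t, x0_hat, ab_t, ab_s, rng):
    # Standard DDPM ancestral step: sample from q(x_{t-1} | x_t, x0_hat).
    beta_t = 1.0 - ab_t / ab_s
    mean = (np.sqrt(ab_s) * beta_t * x0_hat
            + np.sqrt(ab_t / ab_s) * (1.0 - ab_s) * x_t) / (1.0 - ab_t)
    var = (1.0 - ab_s) / (1.0 - ab_t) * beta_t
    return mean + np.sqrt(var) * rng.standard_normal(x_t.shape)

def q_step(x0_hat, ab_s, rng):
    # Assumed reading of q-sampling: re-noise the x_0 prediction by
    # sampling from the forward marginal q(x_{t-1} | x0_hat).
    return (np.sqrt(ab_s) * x0_hat
            + np.sqrt(1.0 - ab_s) * rng.standard_normal(x0_hat.shape))

def sample(shape, alpha_bars, critical=(0.3, 0.8)):
    x_t = rng.standard_normal(shape)
    x0_prev = np.zeros(shape)       # self-conditioning starts from zeros
    for t in range(len(alpha_bars) - 1, 0, -1):
        x0_hat = denoiser(x_t, t, x0_prev)
        x0_prev = x0_hat            # fed back at the next step
        ab_t, ab_s = alpha_bars[t], alpha_bars[t - 1]
        if critical[0] <= ab_s <= critical[1]:
            x_t = q_step(x0_hat, ab_s, rng)           # inside critical interval
        else:
            x_t = ddpm_step(x_t, x0_hat, ab_t, ab_s, rng)
    return x0_prev

# Toy noise schedule: alpha_bar ~1 near clean data, ~0 at pure noise.
alpha_bars = np.linspace(0.999, 1e-3, 50)
out = sample((4,), alpha_bars)
```

The point of the switch is that inside the critical interval the q-step never conditions on a possibly out-of-distribution x_t; it rebuilds x_{t-1} from the model's clean-data estimate alone.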

Tags

Diffusion Models · Discrete Data · Generative Models · DDPM

arXiv Categories

cs.CL