LLM Reasoning relevance: 7/10

Ultra Fast PDE Solving via Physics Guided Few-step Diffusion

Cindy Xiangrui Kong, Yueqi Wang, Haoyang Zheng, Weijian Luo, Guang Lin
arXiv: 2602.03627v1 · Published: 2026-02-03 · Updated: 2026-02-03

AI Summary

Phys-Instruct accelerates diffusion-based PDE solving through physics-guided distillation while improving physical consistency.

Key Contributions

  • Proposes the Phys-Instruct framework for accelerated PDE solving.
  • Strengthens physical consistency via distillation of PDE knowledge.
  • Achieves inference orders of magnitude faster than existing diffusion models while reducing PDE error.

Methodology

A pre-trained diffusion model is compressed into a few-step generator by matching the generator's distribution to the prior diffusion distribution, with PDE knowledge injected into the training objective.
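The idea of combining a distillation term with an explicit physics penalty can be sketched as follows. This is a minimal illustration, not the paper's actual objective: the function names (`heat_residual`, `physics_guided_loss`), the choice of a 1-D heat equation, the mean-squared distillation term, and the weight `lam` are all assumptions for exposition; Phys-Instruct itself uses distribution matching against the prior diffusion model rather than a simple sample-wise MSE.

```python
import numpy as np

def heat_residual(u, dx, dt, nu=0.1):
    """Finite-difference residual of the 1-D heat equation u_t = nu * u_xx.

    u is sampled on a (time, space) grid; the residual is evaluated on
    interior points. A perfect solution gives a residual near zero.
    """
    u_t = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt                      # forward time difference
    u_xx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx**2  # central space difference
    return u_t - nu * u_xx

def physics_guided_loss(u_student, u_teacher, dx, dt, lam=1.0, nu=0.1):
    """Toy physics-guided distillation loss (illustrative only).

    Combines (1) a distillation term pulling the few-step student's
    sample toward the many-step diffusion teacher's sample, and
    (2) a physics term penalizing the PDE residual of the student.
    """
    match = np.mean((u_student - u_teacher) ** 2)
    phys = np.mean(heat_residual(u_student, dx, dt, nu) ** 2)
    return match + lam * phys
```

A student whose samples both track the teacher and satisfy the PDE drives both terms toward zero; `lam` trades off fidelity to the teacher against physical consistency.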

Original Abstract

Diffusion-based models have demonstrated impressive accuracy and generalization in solving partial differential equations (PDEs). However, they still face significant limitations, such as high sampling costs and insufficient physical consistency, stemming from their many-step iterative sampling mechanism and lack of explicit physics constraints. To address these issues, we propose Phys-Instruct, a novel physics-guided distillation framework which not only (1) compresses a pre-trained diffusion PDE solver into a few-step generator via matching generator and prior diffusion distributions to enable rapid sampling, but also (2) enhances the physics consistency by explicitly injecting PDE knowledge through a PDE distillation guidance. Phys-Instruct is built upon a solid theoretical foundation, leading to a practical physics-constrained training objective that admits tractable gradients. Across five PDE benchmarks, Phys-Instruct achieves orders-of-magnitude faster inference while reducing PDE error by more than 8 times compared to state-of-the-art diffusion baselines. Moreover, the resulting unconditional student model functions as a compact prior, enabling efficient and physically consistent inference for various downstream conditional tasks. Our results indicate that Phys-Instruct is a novel, effective, and efficient framework for ultra-fast PDE solving powered by deep generative models.

Tags

Diffusion Models · Partial Differential Equations · Knowledge Distillation · Physics Constraints

arXiv Categories

cs.LG