Agent Tuning & Optimization Relevance: 6/10

PolyFormer: learning efficient reformulations for scalable optimization under complex physical constraints

Yilin Wen, Yi Guo, Bo Zhao, Wei Qi, Zechun Hu, Colin Jones, Jian Sun
arXiv: 2603.08283v1 Published: 2026-03-09 Updated: 2026-03-09

AI Summary

PolyFormer exploits the geometric structure of complex constraints to transform them into efficient polytopes, enabling scalable optimization under physical constraints.

Key Contributions

  • Proposes the PolyFormer framework, which transforms complex physical constraints into efficient polytopic forms.
  • Decouples problem complexity from solution difficulty via polytopic reformulation.
  • Achieves computational speedups of up to 6,400-fold and memory reductions of up to 99.87% across multiple problems.

Methodology

PolyFormer learns the geometric structure underlying the constraints and converts them into polytopic representations, simplifying the optimization problem so that it can be handled by off-the-shelf solvers.
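As a minimal sketch of the polytopic-reformulation idea (not the authors' code): suppose a complex feasible region has been replaced by a learned inner polytopic approximation {x : Ax ≤ b}. The matrices `A` and `b` below are hard-coded stand-ins for what a model like PolyFormer would produce; once the constraints are polytopic, an off-the-shelf LP solver handles the problem directly.

```python
# Sketch: solving over a (hypothetical) learned polytopic reformulation.
import numpy as np
from scipy.optimize import linprog

# Stand-in for a learned polytope {x : A x <= b} -- here a unit box in R^2,
# playing the role of an inner approximation of some complex feasible set.
A = np.array([[ 1.0,  0.0],
              [-1.0,  0.0],
              [ 0.0,  1.0],
              [ 0.0, -1.0]])
b = np.array([1.0, 1.0, 1.0, 1.0])

# Original objective: minimize c^T x. The complex physical constraints are
# gone; only the polytopic surrogate remains, so a standard LP suffices.
c = np.array([-1.0, -2.0])

res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
print(res.x)  # optimizer over the polytope
```

Because the polytope is an inner approximation, any solution it yields is feasible for the original constraints, at the cost of some optimality loss, which is the trade-off the paper quantifies.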

Original Abstract

Real-world optimization problems are often constrained by complex physical laws that limit computational scalability. These constraints are inherently tied to complex regions, and thus learning models that incorporate physical and geometric knowledge, i.e., physics-informed machine learning (PIML), offer a promising pathway for efficient solution. Here, we introduce PolyFormer, which opens a new direction for PIML in prescriptive optimization tasks, where physical and geometric knowledge is not merely used to regularize learning models, but to simplify the problems themselves. PolyFormer captures geometric structures behind constraints and transforms them into efficient polytopic reformulations, thereby decoupling problem complexity from solution difficulty and enabling off-the-shelf optimization solvers to efficiently produce feasible solutions with acceptable optimality loss. Through evaluations across three important problems (large-scale resource aggregation, network-constrained optimization, and optimization under uncertainty), PolyFormer achieves computational speedups up to 6,400-fold and memory reductions up to 99.87%, while maintaining solution quality competitive with or superior to state-of-the-art methods. These results demonstrate that PolyFormer provides an efficient and reliable solution for scalable constrained optimization, expanding the scope of PIML to prescriptive tasks in scientific discovery and engineering applications.

Tags

Physically Constrained Optimization · Polytopic Reformulation · Physics-Informed Machine Learning · PIML · Large-Scale Optimization

arXiv Categories

cs.LG eess.SY math.OC