LLM Reasoning relevance: 6/10

Unifying Optimization and Dynamics to Parallelize Sequential Computation: A Guide to Parallel Newton Methods for Breaking Sequential Bottlenecks

Xavier Gonzalez
arXiv: 2603.16850v1 Published: 2026-03-17 Updated: 2026-03-17

AI Summary

This thesis develops a family of stable and scalable parallel Newton methods that address the difficulty of parallelizing sequential computation, together with theoretical guarantees.

Key Contributions

  • Developed scalable and stable parallel Newton methods, including quasi-Newton and trust-region variants
  • Unified many fixed-point methods into a single parallel Newton framework
  • Established linear convergence rates and gave a precise condition for when parallelization yields acceleration

Methodology

The evaluation of a dynamical system is reframed as a system of nonlinear equations, which is solved with parallel Newton methods; tools from optimization theory are then used to improve the efficiency and stability of these algorithms.
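To make the reframing concrete, here is a minimal toy sketch (not the thesis's code) for a scalar recurrence x[t] = f(x[t-1]): the whole trajectory is treated as the root of G(x)_t = x[t] - f(x[t-1]), and each Newton step reduces to a linear recurrence in the update, which the methods surveyed here would evaluate with a parallel associative scan (we solve it sequentially below for clarity). The function f and all parameters are illustrative assumptions.

```python
import numpy as np

def f(x):
    # A contractive toy dynamics (illustrative choice, not from the thesis)
    return np.tanh(0.9 * x + 0.3)

def df(x):
    # Derivative of f, needed for the Newton Jacobian
    return 0.9 * (1.0 - np.tanh(0.9 * x + 0.3) ** 2)

def sequential_rollout(x0, T):
    # Baseline: evaluate the recurrence one step at a time
    xs = np.empty(T)
    x = x0
    for t in range(T):
        x = f(x)
        xs[t] = x
    return xs

def parallel_newton(x0, T, iters=20, tol=1e-10):
    xs = np.zeros(T)  # initial guess for the entire trajectory
    for _ in range(iters):
        prev = np.concatenate(([x0], xs[:-1]))
        residual = xs - f(prev)  # G(x): all T residuals computable in parallel
        if np.max(np.abs(residual)) < tol:
            break
        # Newton step solves J dx = -G, where J is lower bidiagonal:
        # dx[t] - df(prev[t]) * dx[t-1] = -residual[t].
        # This linear recurrence is where a parallel associative scan applies.
        a = df(prev)
        dx = np.empty(T)
        carry = 0.0
        for t in range(T):
            carry = -residual[t] + a[t] * carry
            dx[t] = carry
        xs = xs + dx
    return xs
```

On this contractive example the Newton iterate matches the sequential rollout after a handful of sweeps, each of which is parallel across the sequence length except for the scan.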

Original Abstract

Massively parallel hardware (GPUs) and long sequence data have made parallel algorithms essential for machine learning at scale. Yet dynamical systems, like recurrent neural networks and Markov chain Monte Carlo, were thought to suffer from sequential bottlenecks. Recent work showed that dynamical systems can in fact be parallelized across the sequence length by reframing their evaluation as a system of nonlinear equations, which can be solved with Newton's method using a parallel associative scan. However, these parallel Newton methods struggled with limitations, primarily inefficiency, instability, and lack of convergence guarantees. This thesis addresses these limitations with methodological and theoretical contributions, drawing particularly from optimization. Methodologically, we develop scalable and stable parallel Newton methods, based on quasi-Newton and trust-region approaches. The quasi-Newton methods are faster and more memory efficient, while the trust-region approaches are significantly more stable. Theoretically, we unify many fixed-point methods into our parallel Newton framework, including Picard and Jacobi iterations. We establish a linear convergence rate for these techniques that depends on the method's approximation accuracy and stability. Moreover, we give a precise condition, rooted in dynamical stability, that characterizes when parallelization provably accelerates a dynamical system and when it cannot. Specifically, the sign of the Largest Lyapunov Exponent of a dynamical system determines whether or not parallel Newton methods converge quickly. In sum, this thesis unlocks scalable and stable methods for parallelizing sequential computation, and provides a firm theoretical basis for when such techniques will and will not work. This thesis also serves as a guide to parallel Newton methods for researchers who want to write the next chapter in this ongoing story.
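The abstract's Lyapunov-exponent condition can be illustrated with a toy experiment (a hedged sketch, not the thesis's code): Picard iteration, the simplest member of the parallel Newton family, converges quickly on a contractive map (negative largest Lyapunov exponent) but stalls on a chaotic logistic map (positive exponent). Both maps and all constants below are illustrative assumptions.

```python
import numpy as np

def picard_error(f, x0, T, iters):
    """Max error of Picard iteration against the sequential rollout."""
    # Ground truth: sequential evaluation of x[t] = f(x[t-1])
    truth = np.empty(T)
    x = x0
    for t in range(T):
        x = f(x)
        truth[t] = x
    # Picard: refresh the whole trajectory from the previous guess;
    # each sweep is embarrassingly parallel across the sequence length
    xs = np.full(T, x0)
    for _ in range(iters):
        prev = np.concatenate(([x0], xs[:-1]))
        xs = f(prev)
    return np.max(np.abs(xs - truth))

contractive = lambda x: 0.5 * np.tanh(x) + 0.1   # |f'| < 1: negative LLE
chaotic = lambda x: 3.9 * x * (1.0 - x)          # logistic map: positive LLE

err_stable = picard_error(contractive, 0.2, T=30, iters=15)
err_chaotic = picard_error(chaotic, 0.2, T=30, iters=15)
```

With fewer sweeps than sequence steps, the contractive system is already solved to high accuracy, while the chaotic one retains large errors, matching the abstract's claim that the sign of the largest Lyapunov exponent governs convergence.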

Tags

Parallel Computing, Newton's Method, Dynamical Systems, Optimization, Large-Scale Machine Learning

arXiv Categories

math.NA cs.AI cs.DC math.DS math.OC