AI Agents relevance: 6/10

LoRDO: Distributed Low-Rank Optimization with Infrequent Communication

Andrej Jovanović, Alex Iacob, Mher Safaryan, Ionut-Vlad Modoranu, Lorenzo Sani, William F. Shen, Xinchi Qiu, Dan Alistarh, Nicholas D. Lane
arXiv: 2602.04396v1 · Published: 2026-02-04 · Updated: 2026-02-04

AI Summary

LoRDO reduces the bandwidth and memory bottlenecks of distributed training through low-rank optimization and infrequent communication, improving training efficiency.

Key Contributions

  • Proposes the LoRDO framework, unifying low-rank optimization with infrequent synchronization
  • Introduces a full-rank quasi-hyperbolic update to restore subspace exploration
  • Demonstrates experimentally that LoRDO is competitive on language modeling and downstream tasks while substantially reducing communication

Methodology

LoRDO computes global low-rank projections from pseudo-gradients, and introduces a full-rank quasi-hyperbolic update to balance following the optimization trajectory against exploring new subspaces.
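The idea of mixing a low-rank momentum direction with the raw full-rank pseudo-gradient can be sketched as follows. This is a minimal illustration, not LoRDO's actual algorithm: the SVD-based projection, the momentum rule, and all names (`low_rank_projection`, `quasi_hyperbolic_step`, `nu`) are simplifying assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def low_rank_projection(pseudo_grad, rank):
    # One common way to build a low-rank projection: the top-`rank`
    # left singular vectors of the pseudo-gradient (an assumption,
    # not necessarily LoRDO's exact construction).
    U, _, _ = np.linalg.svd(pseudo_grad, full_matrices=False)
    return U[:, :rank]  # shape (d, rank)

def quasi_hyperbolic_step(pseudo_grad, momentum, P, nu=0.7, lr=0.1):
    # Quasi-hyperbolic flavor: nu weights the (projected-back)
    # low-rank momentum direction, (1 - nu) weights the raw
    # full-rank pseudo-gradient, so the step never collapses
    # entirely into the low-rank subspace.
    low_rank_grad = P.T @ pseudo_grad            # project into subspace
    momentum = 0.9 * momentum + low_rank_grad    # low-rank optimizer state
    update = nu * (P @ momentum) + (1.0 - nu) * pseudo_grad
    return -lr * update, momentum

d, k, r = 8, 4, 2
pseudo_grad = rng.standard_normal((d, k))
P = low_rank_projection(pseudo_grad, r)
momentum = np.zeros((r, k))
step, momentum = quasi_hyperbolic_step(pseudo_grad, momentum, P)
# The mixed step keeps a full-rank component: its rank exceeds r = 2.
print(np.linalg.matrix_rank(step))  # → 4
```

The full-rank term is what restores subspace exploration: a pure low-rank update (`nu = 1`) would confine every step, and hence the whole trajectory, to the rank-`r` subspace spanned by `P`.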

Original Abstract

Distributed training of foundation models via $\texttt{DDP}$ is limited by interconnect bandwidth. While infrequent communication strategies reduce synchronization frequency, they remain bottlenecked by the memory and communication requirements of optimizer states. Low-rank optimizers can alleviate these constraints; however, in the local-update regime, workers lack access to the full-batch gradients required to compute low-rank projections, which degrades performance. We propose $\texttt{LoRDO}$, a principled framework unifying low-rank optimization with infrequent synchronization. We first demonstrate that, while global projections based on pseudo-gradients are theoretically superior, they permanently restrict the optimization trajectory to a low-rank subspace. To restore subspace exploration, we introduce a full-rank quasi-hyperbolic update. $\texttt{LoRDO}$ achieves near-parity with low-rank $\texttt{DDP}$ in language modeling and downstream tasks at model scales of $125$M--$720$M, while reducing communication by $\approx 10 \times$. Finally, we show that $\texttt{LoRDO}$ improves performance even more in very low-memory settings with small rank/batch size.

Tags

Distributed Training · Low-Rank Optimization · Infrequent Communication · Language Modeling

arXiv Categories

cs.LG cs.AI