LLM Memory & RAG relevance: 8/10

POET-X: Memory-efficient LLM Training by Scaling Orthogonal Transformation

Zeju Qiu, Lixin Liu, Adrian Weller, Han Shi, Weiyang Liu
arXiv: 2603.05500v1 Published: 2026-03-05 Updated: 2026-03-05

AI Summary

POET-X reduces the memory consumption and computational overhead of LLM training by improving the orthogonal equivalence transformation.

Key Contributions

  • Proposes the POET-X algorithm, reducing memory footprint
  • Improves LLM training throughput
  • Enables pretraining a billion-parameter LLM on a single H100 GPU

Methodology

POET-X is an improved variant of POET: it optimizes weight matrices through orthogonal equivalence transformations at reduced computational cost, while preserving training stability and generalization.
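The core idea can be sketched numerically. In an orthogonal equivalence transformation, a weight matrix is reparameterized as W = R · W₀ · Sᵀ with orthogonal factors R and S, which leaves the singular-value spectrum of W₀ untouched. The snippet below is an illustrative sketch of this spectrum-preserving property, not the authors' implementation; all variable names are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n, rng):
    # QR decomposition of a Gaussian matrix yields an orthogonal Q;
    # the sign fix makes the sampled Q uniformly (Haar) distributed.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

m, n = 6, 4
W0 = rng.standard_normal((m, n))   # frozen initial weight matrix
R = random_orthogonal(m, rng)      # left orthogonal factor (the trainable part in POET-style training)
S = random_orthogonal(n, rng)      # right orthogonal factor (likewise trainable)

# Equivalence-transformed weights: W = R @ W0 @ S^T
W = R @ W0 @ S.T

# Orthogonal factors preserve singular values, so the spectrum of W
# equals that of W0 throughout training.
print(np.allclose(np.linalg.svd(W0, compute_uv=False),
                  np.linalg.svd(W, compute_uv=False)))  # → True
```

Because only the orthogonal factors are updated, the spectrum fixed at initialization is maintained exactly, which is the source of POET's training stability; POET-X's contribution is performing these transformations with far less memory and compute.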

Original Abstract

Efficient and stable training of large language models (LLMs) remains a core challenge in modern machine learning systems. To address this challenge, Reparameterized Orthogonal Equivalence Training (POET), a spectrum-preserving framework that optimizes each weight matrix through orthogonal equivalence transformation, has been proposed. Although POET provides strong training stability, its original implementation incurs high memory consumption and computational overhead due to intensive matrix multiplications. To overcome these limitations, we introduce POET-X, a scalable and memory-efficient variant that performs orthogonal equivalence transformations at significantly reduced computational cost. POET-X maintains the generalization and stability benefits of POET while achieving substantial improvements in throughput and memory efficiency. In our experiments, POET-X enables the pretraining of billion-parameter LLMs on a single Nvidia H100 GPU, whereas standard optimizers such as AdamW run out of memory under the same settings.

Tags

LLM Training Memory Efficiency Orthogonal Transformation

arXiv Categories

cs.LG cs.AI cs.CL