LLM Memory & RAG relevance: 8/10

Untied Ulysses: Memory-Efficient Context Parallelism via Headwise Chunking

Ravi Ghadia, Maksim Abraham, Sergei Vorobyov, Max Ryabinin
arXiv: 2602.21196v1 Published: 2026-02-24 Updated: 2026-02-24

AI Summary

UPipe achieves efficient context parallelism through head-level chunking, significantly reducing the activation memory footprint of Transformers and supporting longer contexts.

Key Contributions

  • Proposes UPipe, a context parallelism technique
  • Performs fine-grained chunking at the attention head level, significantly reducing activation memory
  • Matches existing techniques in training speed while supporting longer contexts

Methodology

UPipe performs fine-grained chunking at the attention head level, reducing the memory used by intermediate tensors in the self-attention layer and thereby breaking through the activation memory bottleneck.
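To make the memory argument concrete, here is a minimal single-device NumPy sketch of the headwise-chunking idea (not the paper's distributed UPipe implementation; function names and shapes are illustrative). Computing attention one head chunk at a time means the large (seq × seq) score tensor only ever exists for a chunk of heads, so peak intermediate memory shrinks roughly by the chunking factor:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_headwise(q, k, v, heads_per_chunk=1):
    """Self-attention computed chunk-by-chunk over heads.

    q, k, v: arrays of shape (n_heads, seq_len, d_head).
    Only `heads_per_chunk` score matrices of shape (seq_len, seq_len)
    are materialized at any time, instead of all n_heads at once.
    """
    n_heads, seq_len, d_head = q.shape
    out = np.empty_like(q)
    for start in range(0, n_heads, heads_per_chunk):
        sl = slice(start, start + heads_per_chunk)
        # Scores for this head chunk only: (chunk, seq_len, seq_len).
        scores = q[sl] @ k[sl].transpose(0, 2, 1) / np.sqrt(d_head)
        out[sl] = softmax(scores) @ v[sl]
        # `scores` is released before the next iteration, capping
        # peak intermediate memory at chunk/n_heads of the full version.
    return out
```

Setting `heads_per_chunk=n_heads` recovers the standard all-heads-at-once computation, and the outputs are identical for any chunk size; only peak memory differs.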

Original Abstract

Efficiently processing long sequences with Transformer models usually requires splitting the computations across accelerators via context parallelism. The dominant approaches in this family of methods, such as Ring Attention or DeepSpeed Ulysses, enable scaling over the context dimension but do not focus on memory efficiency, which limits the sequence lengths they can support. More advanced techniques, such as Fully Pipelined Distributed Transformer or activation offloading, can further extend the possible context length at the cost of training throughput. In this paper, we present UPipe, a simple yet effective context parallelism technique that performs fine-grained chunking at the attention head level. This technique significantly reduces the activation memory usage of self-attention, breaking the activation memory barrier and unlocking much longer context lengths. Our approach reduces intermediate tensor memory usage in the attention layer by as much as 87.5% for 32B Transformers, while matching previous context parallelism techniques in terms of training speed. UPipe can support the context length of 5M tokens when training Llama3-8B on a single 8×H100 node, improving upon prior methods by over 25%.

Tags

Transformer · Context Parallelism · Memory Efficiency · Attention · Distributed Training

arXiv Categories

cs.LG cs.DC