LLM Memory & RAG relevance: 8/10

An Efficient Heterogeneous Co-Design for Fine-Tuning on a Single GPU

Ruijia Yang, Zeyi Wen
arXiv: 2603.16428v1 Published: 2026-03-17 Updated: 2026-03-17

AI Summary

SlideFormer is a heterogeneous co-design system that enables efficient fine-tuning of very large language models on a single GPU.

Key Contributions

  • A lightweight asynchronous engine that overlaps computation with I/O
  • Efficient heterogeneous memory management that reduces peak memory usage
  • Optimized Triton kernels that improve throughput

Methodology

SlideFormer combines a sliding-window mechanism, heterogeneous memory management, and optimized Triton kernels to coordinate the CPU and GPU, streamline I/O, and lower memory requirements.

Original Abstract

Fine-tuning Large Language Models (LLMs) has become essential for domain adaptation, but its memory-intensive nature exceeds the capabilities of most GPUs. To address this challenge and democratize LLM fine-tuning, we present SlideFormer, a novel system designed for single-GPU environments. Our innovations are: (1) A lightweight asynchronous engine that treats the GPU as a sliding window and overlaps GPU computation with CPU updates and multi-tier I/O. (2) A highly efficient heterogeneous memory management scheme that significantly reduces peak memory usage. (3) Optimized Triton kernels that resolve key bottlenecks, integrated with advanced I/O. This collaborative design enables fine-tuning of the latest 123B+ models on a single RTX 4090, supporting up to 8x larger batch sizes and 6x larger models. In evaluations, SlideFormer achieves 1.40x to 6.27x higher throughput while roughly halving CPU/GPU memory usage compared to baselines, sustaining >95% peak performance on both NVIDIA and AMD GPUs.
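The sliding-window idea from the abstract — keeping only a bounded window of layers resident on the GPU while prefetching the next layers' weights in the background — can be sketched with plain Python threads. This is a minimal illustration under stated assumptions, not SlideFormer's implementation: the function name, the `load`/`compute` callbacks, and the window size are hypothetical stand-ins for the real CPU→GPU transfers and Triton kernels; the bounded queue plays the role of the GPU-resident window that caps peak memory.

```python
import threading
import queue

def sliding_window_pass(num_layers, window_size, load, compute):
    """Process `num_layers` layers while keeping at most `window_size`
    layers' weights resident, overlapping weight loading with compute.

    A background thread prefetches upcoming layers' weights (standing in
    for CPU->GPU I/O) while the main thread runs `compute` on the current
    layer. The bounded queue blocks the prefetcher once the window is
    full, which is what caps peak "device" memory in this toy model.
    """
    resident = queue.Queue(maxsize=window_size)

    def prefetcher():
        for i in range(num_layers):
            resident.put((i, load(i)))  # blocks while the window is full

    t = threading.Thread(target=prefetcher)
    t.start()
    order = []
    for _ in range(num_layers):
        idx, weights = resident.get()  # consuming a layer frees a window slot
        compute(idx, weights)
        order.append(idx)
    t.join()
    return order

# Toy run: 6 layers and a 2-layer window; the callbacks record the schedule.
loaded, computed = [], []
order = sliding_window_pass(
    num_layers=6, window_size=2,
    load=lambda i: loaded.append(i) or f"weights_{i}",
    compute=lambda i, w: computed.append(i),
)
```

In the real system the overlap would be between CUDA streams, CPU-side optimizer updates, and multi-tier storage rather than Python threads, but the scheduling structure — a producer bounded by the window and a consumer that frees slots as it advances — is the same.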

Tags

LLM Fine-tuning Single GPU Heterogeneous Computing Memory Optimization

arXiv Categories

cs.DC cs.AI