LLM Reasoning relevance: 7/10

GPU-Accelerated Optimization of Transformer-Based Neural Networks for Real-Time Inference

Soutrik Mukherjee, Sangwhan Cha
arXiv: 2603.28708v1 Published: 2026-03-30 Updated: 2026-03-30

AI Summary

Uses GPU acceleration and mixed-precision optimization of Transformer models to achieve real-time inference with a reduced memory footprint.

Key Contributions

  • Designed and evaluated a GPU-accelerated inference pipeline based on NVIDIA TensorRT
  • Proposed a mixed-precision strategy that balances performance against numerical accuracy
  • Provided an analysis of performance and accuracy trade-offs for Transformer models across different GPU architectures

Methodology

Using NVIDIA TensorRT, mixed-precision optimization is applied to BERT and GPT-2 models, with experimental evaluation across a range of batch sizes and sequence lengths.
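The core of the hybrid strategy described above (FP32 for numerically sensitive operations such as softmax and layer normalization, FP16 for linear layers) can be illustrated with a minimal NumPy sketch. This is not the paper's TensorRT implementation; all function names, dimensions, and weight values here are hypothetical, and the cosine-similarity check mirrors the fidelity metric the paper reports.

```python
import numpy as np

def softmax_fp32(x):
    # Numerically sensitive op kept in FP32, with max-subtraction for stability
    x = x.astype(np.float32)
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def layernorm_fp32(x, eps=1e-5):
    # Layer normalization also kept in FP32 (small variances amplify FP16 error)
    x = x.astype(np.float32)
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def linear_fp16(x, w, b):
    # Compute-heavy matmul runs in FP16
    return x.astype(np.float16) @ w.astype(np.float16) + b.astype(np.float16)

rng = np.random.default_rng(0)
d = 64  # hypothetical hidden size for illustration
x = rng.standard_normal((4, d)).astype(np.float32)
w = (rng.standard_normal((d, d)) * 0.1).astype(np.float32)
b = np.zeros(d, dtype=np.float32)

# Hybrid path: FP16 linear -> FP32 layernorm -> FP32 softmax
hybrid = softmax_fp32(layernorm_fp32(linear_fp16(x, w, b)))
# Full-FP32 baseline for comparison
base = softmax_fp32(layernorm_fp32(x @ w + b))

# Per-row cosine similarity, the fidelity metric used in the paper
cos = (hybrid * base).sum(-1) / (
    np.linalg.norm(hybrid, axis=-1) * np.linalg.norm(base, axis=-1)
)
print(cos.min())
```

On this toy configuration the per-row cosine similarity stays very close to 1, consistent in spirit with the >= 0.9998 fidelity the paper reports for its real workloads.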

Original Abstract

This paper presents the design and evaluation of a GPU-accelerated inference pipeline for transformer models using NVIDIA TensorRT with mixed-precision optimization. We evaluate BERT-base (110M parameters) and GPT-2 (124M parameters) across batch sizes from 1 to 32 and sequence lengths from 32 to 512. The system achieves up to 64.4x speedup over CPU baselines, sub-10 ms latency for single-sample inference, and a 63 percent reduction in memory usage. We introduce a hybrid precision strategy that preserves FP32 for numerically sensitive operations such as softmax and layer normalization, while applying FP16 to linear layers. This approach maintains high numerical fidelity (cosine similarity >= 0.9998 relative to baseline outputs) and eliminates NaN instability. The pipeline is implemented as a modular, containerized system that enables reproducible benchmarking across more than 360 configurations. Cross-GPU validation on an NVIDIA A100 shows consistent FP16 speedup ratios between 1.84x and 2.00x, along with stable numerical behavior. Downstream evaluation on SST-2 demonstrates no accuracy degradation under hybrid precision. Validation on WikiText-2 shows that random inputs underestimate NaN instability by up to 6x for full FP16, while confirming the robustness of the hybrid approach (0.0 percent NaN, cosine similarity >= 0.9998). These results provide a detailed characterization of performance and accuracy trade-offs across GPU architectures and offer practical guidance for deploying transformer models in latency-critical environments.
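The abstract's claim that full FP16 suffers NaN instability while the hybrid approach does not can be demonstrated in miniature: FP16's maximum representable value is about 65504, so a naive softmax (without max-subtraction) overflows to infinity as soon as a logit exceeds roughly 11, and inf/inf yields NaN. This NumPy sketch is an illustrative assumption about the failure mode, not the paper's actual evaluation code.

```python
import numpy as np

def naive_softmax(x):
    # No max-subtraction: exp() overflows quickly in FP16 (exp(12) > 65504)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([12.0, 1.0, 0.0])

fp16_out = naive_softmax(logits.astype(np.float16))
fp32_out = naive_softmax(logits.astype(np.float32))

print("FP16 produced NaN:", np.isnan(fp16_out).any())
print("FP32 produced NaN:", np.isnan(fp32_out).any())
```

Keeping softmax in FP32 (or subtracting the row maximum before exponentiation, as stable implementations do) removes this failure mode, which is the motivation for the paper's hybrid precision strategy.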

Tags

Transformer GPU Acceleration Inference Optimization TensorRT Mixed-Precision

arXiv Categories

cs.LG cs.DC