Determining Energy Efficiency Sweet Spots in Production LLM Inference
AI Summary
This paper analyzes energy efficiency in LLM inference, identifies an optimal efficiency regime ("sweet spot"), and proposes a model for predicting energy efficiency.
Main Contributions
- Identified energy-efficiency sweet spots in LLM inference
- Proposed an analytical energy-efficiency prediction model based on the Transformer architecture
- Validated the model's accuracy across multiple LLMs on NVIDIA H100 GPUs
Methodology
An analytical model is derived from the computational and memory-access complexity of the Transformer architecture, and its accuracy is validated experimentally.
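The shape of such a complexity-based model can be sketched as follows. This is an illustrative approximation, not the paper's exact formulation: the function names and all coefficients (`c_fixed`, `c_linear`, `c_quad`, `c_mem`) are hypothetical placeholders that would be fitted to measured data.

```python
# Illustrative sketch of an analytical energy model for one inference request,
# built from Transformer complexity terms. All coefficients are hypothetical.

def energy_joules(n_in, n_out,
                  c_fixed=5.0,      # per-request overhead (J), hypothetical
                  c_linear=0.01,    # linear per-token compute cost (J/token)
                  c_quad=1e-6,      # attention cost, quadratic in input length
                  c_mem=2e-4):      # KV-cache memory traffic per decoded token
    """Estimate energy: prefill attention is O(n_in^2); each decode step
    attends over the growing KV cache of (n_in + t) tokens."""
    prefill = c_linear * n_in + c_quad * n_in ** 2
    decode = sum(c_linear + c_mem * (n_in + t) for t in range(n_out))
    return c_fixed + prefill + decode

def efficiency_tokens_per_joule(n_in, n_out):
    """Energy efficiency: generated tokens per joule."""
    return n_out / energy_joules(n_in, n_out)
```

Even with placeholder coefficients, this form reproduces the qualitative sweet-spot behavior reported in the paper: efficiency rises with output length as the fixed cost is amortized, but falls for long inputs (quadratic prefill plus a larger KV cache at every decode step) and for very long outputs.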
Original Abstract
Large Language Model (LLM) inference is central to modern AI applications, making it critical to understand their energy footprint. Existing approaches typically estimate energy consumption through simple linear functions of input and output sequence lengths, yet our observations reveal clear energy-efficiency regimes: peak efficiency occurs with short-to-moderate inputs and medium-length outputs, while efficiency drops sharply for long inputs or very short outputs, indicating a non-linear dependency. In this work, we propose an analytical model derived from the computational and memory-access complexity of the Transformer architecture, capable of accurately characterizing the efficiency curve as a function of input and output lengths. To assess its accuracy, we evaluate energy consumption using TensorRT-LLM on NVIDIA H100 GPUs across a diverse set of LLMs ranging from 1B to 9B parameters, including OPT, LLaMA, Gemma, Falcon, Qwen2, and Granite, tested over input and output lengths from 64 to 4096 tokens, achieving a mean MAPE of 1.79%. Our results show that aligning sequence lengths with these efficiency "Sweet Spots" can substantially reduce energy usage, supporting informed truncation, summarization, and adaptive generation strategies in production systems.
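The reported accuracy metric, MAPE (mean absolute percentage error), compares measured against model-predicted energy. A minimal sketch of the standard computation (the helper name is ours, not from the paper):

```python
# Standard MAPE computation between measured and predicted energy values.

def mape(measured, predicted):
    """Return MAPE in percent: mean of |measured - predicted| / measured."""
    assert len(measured) == len(predicted) and measured, "need paired, non-empty data"
    return 100.0 * sum(abs(m - p) / m for m, p in zip(measured, predicted)) / len(measured)
```

For example, measurements of 100 J and 200 J against predictions of 98 J and 205 J give errors of 2% and 2.5%, i.e. a MAPE of 2.25%.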