Fine-Grained Post-Training Quantization for Large Vision Language Models with Quantization-Aware Integrated Gradients
AI Summary
This paper proposes a fine-grained post-training quantization method that uses quantization-aware integrated gradients to improve the quantization performance of Large Vision Language Models (LVLMs).
Key Contributions
- Proposes a fine-grained quantization strategy based on quantization-aware integrated gradients
- Pushes quantization granularity from the modality level to the token level
- Validates the method on multiple LVLMs, improving model accuracy
Methodology
Integrated gradients are used to quantitatively evaluate token sensitivity, enabling fine-grained model quantization that captures both cross-modality and intra-modality dynamics.
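The paper's exact sensitivity formulation is not reproduced here, but the integrated-gradients attribution it builds on is standard. Below is a minimal, hypothetical sketch (plain NumPy, toy function in place of an LVLM, midpoint Riemann sum for the path integral); in the paper's setting, per-token sensitivity would be derived from such attributions and used to assign quantization precision.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline=None, steps=50):
    """Approximate integrated gradients along the straight-line
    path from baseline to x, using a midpoint Riemann sum."""
    if baseline is None:
        baseline = np.zeros_like(x)
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints in (0, 1)
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    # IG_i = (x_i - baseline_i) * average gradient along the path
    return (x - baseline) * total / steps

# Toy stand-in for a model output: f(x) = sum(x**2), gradient 2x.
f = lambda x: np.sum(x ** 2)
grad_f = lambda x: 2.0 * x

x = np.array([1.0, 2.0, 3.0])
attr = integrated_gradients(grad_f, x)
# Completeness axiom: attributions sum to f(x) - f(baseline).
```

A token-level sensitivity score could then be the magnitude of each token's attribution, with higher-scoring tokens kept at higher precision; this mapping from attribution to bit allocation is an assumption for illustration, not the paper's exact rule.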
Original Abstract
Large Vision Language Models (LVLMs) have achieved remarkable success in a range of downstream tasks that require multimodal interaction, but their capabilities come with substantial computational and memory overhead, which hinders practical deployment. Among numerous acceleration techniques, post-training quantization is a popular and effective strategy for reducing memory cost and accelerating inference. However, existing LVLM quantization methods typically measure token sensitivity at the modality level, which fails to capture the complex cross-token interactions and falls short in quantitatively measuring the quantization error at the token level. As tokens interact within the model, the distinction between modalities gradually diminishes, suggesting the need for fine-grained calibration. Inspired by axiomatic attribution in mechanistic interpretability, we introduce a fine-grained quantization strategy based on Quantization-aware Integrated Gradients (QIG), which leverages integrated gradients to quantitatively evaluate token sensitivity and push the granularity from modality level to token level, reflecting both inter-modality and intra-modality dynamics. Extensive experiments on multiple LVLMs under both W4A8 and W3A16 settings show that our method improves accuracy across models and benchmarks with negligible latency overhead. For example, under 3-bit weight-only quantization, our method improves the average accuracy of LLaVA-onevision-7B by 1.60%, reducing the gap to its full-precision counterpart to only 1.33%. The code is available at https://github.com/ucas-xiang/QIG.