BATQuant: Outlier-resilient MXFP4 Quantization via Learnable Block-wise Optimization
AI Summary
BATQuant makes MXFP4 quantization robust to outliers via block-wise optimization, substantially improving MLLM/LLM performance.
Key Contributions
- Proposes a block-wise affine transformation that prevents outliers from propagating across quantization blocks
- Introduces Global and Private Kronecker (GPK) decomposition to reduce storage and runtime overhead
- Adds block-wise learnable clipping to suppress residual outliers
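The clipping idea can be illustrated with a minimal sketch: one threshold per quantization block, saturating residual outliers before rounding. The function name, the per-block threshold layout, and the fact that the thresholds would be learned during the block-wise optimization are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def blockwise_clip(x: np.ndarray, alpha: np.ndarray, block: int = 32) -> np.ndarray:
    """Clamp each 32-element block to its own (assumed learnable) threshold
    alpha[b], so residual outliers saturate instead of inflating the block scale."""
    xb = x.reshape(-1, block)
    return np.clip(xb, -alpha[:, None], alpha[:, None]).reshape(x.shape)

# Two blocks with thresholds 1.0 and 2.0: values beyond them are saturated.
x = np.full(64, 5.0)
x[0] = -5.0
out = blockwise_clip(x, np.array([1.0, 2.0]))
```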
Methodology
Block-wise affine transformations confine outlier propagation within quantization blocks, while Kronecker decomposition and learnable clipping keep the quantization optimization parameter-efficient.
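A minimal sketch of what a block-restricted transform with Kronecker-factored parameters could look like, assuming 32-element MXFP4 blocks. The factor sizes (8x8 shared, 4x4 private), names, and initialization here are hypothetical choices for illustration, not the paper's actual GPK parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
d, block = 128, 32  # hidden size (assumed) and MXFP4 block size

def block_affine(x: np.ndarray, Ts: np.ndarray) -> np.ndarray:
    """Apply one transform per block, independently, so outlier energy
    cannot leak across quantization-block boundaries."""
    xb = x.reshape(*x.shape[:-1], d // block, block)
    return np.einsum('...bi,bij->...bj', xb, Ts).reshape(*x.shape)

# Kronecker-factored per-block transforms: a shared "global" 8x8 factor A
# and a "private" 4x4 factor B per block give a 32x32 matrix from 64 + 16
# parameters per block instead of 1024 (near-identity init, assumed).
A = np.eye(8) + 0.1 * rng.standard_normal((8, 8))
Bs = [np.eye(4) + 0.1 * rng.standard_normal((4, 4)) for _ in range(d // block)]
Ts = np.stack([np.kron(A, B) for B in Bs])  # shape (d//block, 32, 32)

x = rng.standard_normal((2, d))
y = block_affine(x, Ts)
# To preserve the layer's function, the weights would absorb the inverse
# per-block transform offline, as in rotation-based PTQ methods.
```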
Original Abstract
Microscaling floating-point (MXFP) formats have emerged as a promising standard for deploying Multi-modal Large Language Models (MLLMs) and Large Language Models (LLMs) on modern accelerator architectures. However, existing Post-Training Quantization (PTQ) methods, particularly rotation-based techniques designed for integer formats, suffer from severe performance collapse when applied to MXFP4. Recent studies attribute this failure to a fundamental format mismatch: global orthogonal rotations inadvertently transfer outlier energy across quantization blocks, inducing new outliers that disrupt local block-wise scaling, while often creating bimodal activation distributions that underutilize the limited quantization range. To address these issues, we propose BATQuant (Block-wise Affine Transformation), which restricts transformations to align with MXFP granularity to prevent cross-block outlier propagation, while relaxing orthogonality constraints to optimize distribution shaping. To ensure parameter efficiency, we introduce Global and Private Kronecker (GPK) decomposition, which effectively reduces storage and runtime overhead, and incorporate Block-wise Learnable Clipping to suppress residual outliers. Extensive experiments on both MLLMs and LLMs demonstrate that BATQuant establishes new state-of-the-art results under aggressive W4A4KV16 configurations, recovering up to 96.43% of full-precision performance on multimodal benchmarks and clearly outperforming existing methods across diverse tasks.
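The "local block-wise scaling" the abstract refers to can be made concrete with a small simulation of MXFP4 as defined in the OCP Microscaling (MX) specification: 32-element blocks sharing one power-of-two (E8M0) scale, with E2M1 (FP4) elements. The function name is ours; the grid and scale rule follow the spec.

```python
import numpy as np

# Representable E2M1 (FP4) magnitudes per the OCP Microscaling spec;
# each value also carries a sign bit.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_fake_quant(x: np.ndarray, block: int = 32) -> np.ndarray:
    """Simulated MXFP4 round-trip: each 32-element block shares one
    power-of-two (E8M0) scale; elements round to the nearest E2M1 value."""
    xb = x.reshape(-1, block)
    absmax = np.abs(xb).max(axis=1, keepdims=True)
    # Per-block scale 2^(floor(log2(max)) - emax), with emax(E2M1) = 2,
    # so the block maximum lands near the top of the E2M1 range (6.0).
    scale = 2.0 ** (np.floor(np.log2(np.maximum(absmax, 1e-12))) - 2)
    mag = np.abs(xb / scale)
    q = np.sign(xb) * FP4_GRID[np.abs(mag[..., None] - FP4_GRID).argmin(-1)]
    return (q * scale).reshape(x.shape)

# One block: 6.0 is exactly representable, 2.9 rounds to 3.0, -0.4 to -0.5.
x = np.zeros(32)
x[:3] = [6.0, 2.9, -0.4]
out = mxfp4_fake_quant(x)
```

A single large value in a block forces the shared scale up and coarsens every other element's step size, which is why transferring outlier energy into a previously well-behaved block is so damaging.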