LLM Reasoning relevance: 8/10

Bielik-Q2-Sharp: A Comparative Study of Extreme 2-bit Quantization Methods for a Polish 11B Language Model

Jakub Prejzner
arXiv: 2603.04162v1 · Published: 2026-03-04 · Updated: 2026-03-04

AI Summary

A comparison of extreme 2-bit quantization methods on a Polish 11B language model, with models and data released publicly.

Key Contributions

  • First systematic evaluation of 2-bit quantization for a Polish LLM
  • Comparison of six state-of-the-art post-training quantization methods
  • Identification of a failure mode of rotation-based methods on generation tasks

Methodology

The Bielik-11B model is quantized post-training, calibrated on the CulturaX-PL corpus, and evaluated on multiple Polish-language benchmarks.
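For intuition on what "2-bit quantization" means at the weight level, here is a minimal round-to-nearest sketch with per-group scales. This is an illustrative toy only; the methods compared in the paper (QuIP#, QTIP, VPTQ, AQLM, etc.) use far more sophisticated codebooks, rotations, and Hessian-aware calibration, and the group size of 64 is an arbitrary choice here, not taken from the paper.

```python
import numpy as np

def quantize_2bit_rtn(w: np.ndarray, group_size: int = 64) -> np.ndarray:
    """Toy per-group symmetric round-to-nearest 2-bit quantization.

    Each group of `group_size` weights shares one scale; weights are
    rounded to one of 4 integer levels {-2, -1, 0, 1} (2 bits) and
    dequantized back to float for inspection.
    """
    g = w.reshape(-1, group_size)
    # One scale per group, chosen so the largest magnitude maps near the edge.
    scale = np.abs(g).max(axis=1, keepdims=True) / 2.0 + 1e-12
    q = np.clip(np.round(g / scale), -2, 1)  # 4 levels -> 2 bits per weight
    return (q * scale).reshape(w.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 64)).astype(np.float32)
w_hat = quantize_2bit_rtn(w)
err = float(np.abs(w - w_hat).mean())  # nonzero: 2 bits loses information
```

With only four representable levels per group, the reconstruction error is substantial, which is exactly why the calibration machinery (shared Hessians over CulturaX-PL in the paper's setup) matters at this bit-width.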

Original Abstract

We present Bielik-Q2-Sharp, the first systematic academic evaluation of extreme 2-bit quantization applied to a Polish large language model. Using Bielik-11B-v2.3-Instruct (11B parameters, Mistral architecture) as our base model, we compare six state-of-the-art post-training quantization methods -- QuIP#, SpinQuant+GPTQ, ButterflyQuant, QTIP, VPTQ, and AQLM -- all calibrated on a Polish-language corpus (CulturaX-PL) with shared Hessian matrices. Our best variant (QuIP# E8P12) achieves 71.92% across 22 Polish benchmarks versus 72.07% for the IQ2_XXS baseline -- within statistical noise, at a modest size premium (3.26 GB vs. ~2.6 GB). On eq_bench, our method scores 47.14 versus 43.53 (+3.6pp), suggesting superior preservation of higher-order reasoning. QTIP achieves the best per-bit efficiency (79.4% MC acc_norm at ~2.4 bpw, 3.27 GB), matching VPTQ's quality at 35% smaller size. We additionally document a MC-generation dissociation phenomenon where rotation-based methods preserve log-likelihood quality but fail catastrophically at autoregressive generation. The entire project was conducted by a single independent researcher on cloud GPUs (vast.ai) within a $285 budget. All models, Hessians, and evaluation logs are publicly available.
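The reported sizes can be sanity-checked with back-of-envelope arithmetic: 11B weights at roughly 2.4 bits per weight, ignoring embeddings, scales, and other overhead (decimal GB throughout).

```python
# Rough size estimate for ~2.4 bits/weight on an 11B-parameter model.
params = 11e9       # parameter count
bpw = 2.4           # approximate bits per weight reported for QTIP
size_gb = params * bpw / 8 / 1e9  # bits -> bytes -> decimal GB
# ~3.3 GB, consistent with the reported 3.27 GB for the QTIP variant
```

The small gap to the reported 3.27 GB is plausibly explained by the exact bpw and which tensors are left unquantized; this is an estimate, not the paper's accounting.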

Tags

Quantization · LLM · Polish · Model Compression

arXiv Categories

cs.CL cs.AI