Agent Tuning & Optimization (relevance: 6/10)

Big2Small: A Unifying Neural Network Framework for Model Compression

Jing-Xiao Liao, Haoran Wang, Tao Li, Daoming Lyu, Yi Zhang, Chengjun Cai, Feng-Lei Fan
arXiv: 2603.29768v1 Published: 2026-03-31 Updated: 2026-03-31

AI Summary

Proposes the Big2Small framework, which compresses models via implicit neural representations, enabling efficient model compression and inference.

Key Contributions

  • Constructs a unifying mathematical framework for model compression
  • Proposes Big2Small, a data-free model compression framework
  • Introduces Outlier-Aware Preprocessing and a Frequency-Aware Loss
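The summary names Outlier-Aware Preprocessing only at a high level. As one hedged illustration of the idea of handling extreme weight values before fitting, the sketch below clips the bulk of the weights to a smooth range and stores the few outliers exactly (the 3-sigma threshold, the dictionary-based outlier store, and the function names are all assumptions, not the paper's scheme):

```python
import numpy as np

def outlier_aware_preprocess(w, k=3.0):
    """Split weights into a clipped bulk (easier for an INR to fit) and a
    sparse set of outliers stored exactly.

    Hypothetical scheme: the paper describes Outlier-Aware Preprocessing
    only at a high level, so the k-sigma rule here is an assumption.
    """
    mu, sigma = w.mean(), w.std()
    lo, hi = mu - k * sigma, mu + k * sigma
    mask = (w < lo) | (w > hi)                 # locate extreme values
    bulk = np.clip(w, lo, hi)                  # smooth target for compression
    outliers = {tuple(i): w[tuple(i)] for i in np.argwhere(mask)}
    return bulk, outliers

def restore(bulk, outliers):
    """Undo the preprocessing: write the exact outliers back into the bulk."""
    w = bulk.copy()
    for idx, v in outliers.items():
        w[idx] = v
    return w

# demo: one injected extreme weight is detected, clipped, and restored
w = np.random.default_rng(0).standard_normal((64, 64))
w[0, 0] = 50.0
bulk, outliers = outlier_aware_preprocess(w)
print(f"outliers stored exactly: {len(outliers)}")
```

Separating outliers this way keeps the compressed representation from spending capacity on a handful of extreme values while still reconstructing them losslessly.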

Methodology

Uses an implicit neural representation (INR) to encode the weights of a large model and reconstructs those weights at inference time. Preprocessing and the loss function are designed to improve reconstruction quality.
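The coordinates-to-weights idea can be sketched end to end. In the toy below, a fixed bank of sinusoidal features fitted by least squares stands in for a trained INR: each weight entry is indexed by its (row, col) coordinate, a compact representation is fitted to map coordinates to weight values, and the full matrix is rebuilt at "inference" time. The layer size, frequency grid, smooth synthetic weights, and closed-form fit are all illustrative assumptions, not the paper's architecture:

```python
import numpy as np
from itertools import product

# Toy "big model" layer: a 64x64 weight matrix with smooth structure.
n = 64
r, c = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
W = np.sin(3 * r) * np.cos(2 * c)

# Each weight is addressed by its (row, col) coordinate; the INR is a
# mapping coordinates -> weight value.
coords = np.stack([r.ravel(), c.ravel()], axis=1)           # (4096, 2)

# Sinusoidal feature bank standing in for the INR's sine layers
# (small integer frequency grid; a hypothetical choice).
freqs = np.array(list(product(range(4), range(-3, 4)))).T   # (2, 28)
feats = np.concatenate([np.sin(coords @ freqs),
                        np.cos(coords @ freqs)], axis=1)    # (4096, 56)

# "Training" the compact representation: least-squares fit of a linear head.
head, *_ = np.linalg.lstsq(feats, W.ravel(), rcond=None)

# "Inference": reconstruct the full weight matrix from the compact form.
W_hat = (feats @ head).reshape(W.shape)

stored = freqs.size + head.size                             # parameters kept
rmse = np.sqrt(np.mean((W - W_hat) ** 2))
print(f"compression ratio: {W.size / stored:.1f}x")
print(f"reconstruction RMSE: {rmse:.2e}")
```

The compression win comes from storing only the compact representation (here 112 numbers instead of 4096); the actual framework trains a small neural network rather than solving a linear system, but the store-small, reconstruct-big workflow is the same.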

Original Abstract

With the development of foundational models, model compression has become a critical requirement. Various model compression approaches have been proposed such as low-rank decomposition, pruning, quantization, ergodic dynamic systems, and knowledge distillation, which are based on different heuristics. To elevate the field from fragmentation to a principled discipline, we construct a unifying mathematical framework for model compression grounded in measure theory. We further demonstrate that each model compression technique is mathematically equivalent to a neural network subject to a regularization. Building upon this mathematical and structural equivalence, we propose an experimentally-verified data-free model compression framework, termed Big2Small, which translates Implicit Neural Representations (INRs) from data domain to the domain of network parameters. Big2Small trains compact INRs to encode the weights of larger models and reconstruct the weights during inference. To enhance reconstruction fidelity, we introduce Outlier-Aware Preprocessing to handle extreme weight values and a Frequency-Aware Loss function to preserve high-frequency details. Experiments on image classification and segmentation demonstrate that Big2Small achieves competitive accuracy and compression ratios compared to state-of-the-art baselines.

Tags

Model Compression · Implicit Neural Representations · Neural Networks · Quantization

arXiv Category

cs.LG