Agent Tuning & Optimization · Relevance: 7/10

Powering Up Zeroth-Order Training via Subspace Gradient Orthogonalization

Yicheng Lang, Changsheng Wang, Yihua Zhang, Mingyi Hong, Zheng Zhang, Wotao Yin, Sijia Liu
arXiv: 2602.17155v1 · Published: 2026-02-19 · Updated: 2026-02-19

AI Summary

Proposes ZO-Muon, which uses subspace gradient orthogonalization to substantially improve the efficiency and accuracy of zeroth-order optimization when fine-tuning large models.

Main Contributions

  • Proposes a subspace gradient orthogonalization framework
  • Designs the ZO-Muon algorithm, combining low-rank structure with gradient orthogonalization
  • Experiments show ZO-Muon outperforms existing methods on LLM and ViT fine-tuning
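For context, the finite-difference gradient estimator that ZO methods such as the MeZO baseline build on can be sketched as follows. This is a minimal two-point, SPSA-style estimate; the function name and parameters are illustrative, not the paper's implementation:

```python
import numpy as np

def zo_grad_estimate(f, theta, mu=1e-3, seed=0):
    """Two-point zeroth-order gradient estimate of f at theta.

    ghat = (f(theta + mu*z) - f(theta - mu*z)) / (2*mu) * z
    where z is a random Gaussian perturbation direction.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(theta.shape)
    # Two function evaluations replace one backpropagation pass.
    diff = (f(theta + mu * z) - f(theta - mu * z)) / (2.0 * mu)
    return diff * z
```

In expectation over z this estimator points along the true gradient, but a single sample is very noisy in high dimensions, which is the variance problem the paper's subspace projection targets.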

Methodology

Uses a projection-based subspace view to reduce gradient-estimation variance, and applies Muon-style spectral optimization to extract informative structure from noisy ZO gradients.
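A minimal sketch of how these two ingredients might combine: project the noisy ZO gradient onto a low-rank subspace, orthogonalize it there, and map the result back. The projection matrix `P`, the SVD-based orthogonalization, and the function names are assumptions for illustration; the paper's exact update may differ:

```python
import numpy as np

def orthogonalize(G):
    # Muon-style step: replace G by its polar factor U @ Vt, setting all
    # singular values to 1 so only the spectral directions survive.
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    return U @ Vt

def subspace_orthogonalized_update(G_noisy, P):
    # Project the noisy ZO gradient (d x n) onto the rank-r subspace spanned
    # by the orthonormal columns of P (d x r), orthogonalize in that smaller
    # space, then map the cleaned update back to the full parameter space.
    G_sub = P.T @ G_noisy
    return P @ orthogonalize(G_sub)
```

Working in the r-dimensional subspace both exploits the low-rank structure of model updates and makes the orthogonalization itself cheaper, since the SVD acts on an r x n matrix instead of d x n.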

Original Abstract

Zeroth-order (ZO) optimization provides a gradient-free alternative to first-order (FO) methods by estimating gradients via finite differences of function evaluations, and has recently emerged as a memory-efficient paradigm for fine-tuning large-scale models by avoiding backpropagation. However, ZO optimization has a fundamental tension between accuracy and query efficiency. In this work, we show that ZO optimization can be substantially improved by unifying two complementary principles: (i) a projection-based subspace view that reduces gradient estimation variance by exploiting the intrinsic low-rank structure of model updates, and (ii) Muon-style spectral optimization that applies gradient orthogonalization to extract informative spectral structure from noisy ZO gradients. These findings form a unified framework of subspace gradient orthogonalization, which we instantiate in a new method, ZO-Muon, admitting a natural interpretation as a low-rank Muon optimizer in the ZO setting. Extensive experiments on large language models (LLMs) and vision transformers (ViTs) demonstrate that ZO-Muon significantly accelerates convergence and achieves a win-win improvement in accuracy and query/runtime efficiency. Notably, compared to the popular MeZO baseline, ZO-Muon requires only 24.7% of the queries to reach the same SST-2 performance for LLM fine-tuning, and improves accuracy by 25.1% on ViT-B fine-tuning on CIFAR-100.
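In practice, Muon-style optimizers avoid an explicit SVD and approximate the polar factor with a Newton-Schulz iteration. The snippet below shows a minimal cubic variant of that iteration; Muon itself uses a tuned higher-order polynomial, and this is an illustrative sketch rather than the paper's implementation:

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=30):
    # Iteratively drive all singular values of X toward 1 via
    #   X <- 1.5 * X - 0.5 * X @ X.T @ X,
    # which converges to the polar factor of G when the initial
    # spectral norm is below sqrt(3).
    X = G / np.linalg.norm(G)  # Frobenius normalization bounds the spectrum
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X
```

The iteration uses only matrix multiplications, so it runs efficiently on accelerators where an SVD would be a bottleneck.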

Tags

zeroth-order optimization · gradient estimation · large-model fine-tuning · subspace learning · gradient orthogonalization

arXiv Category

cs.LG