LLM Reasoning relevance: 6/10

Preserving Continuous Symmetry in Discrete Spaces: Geometric-Aware Quantization for SO(3)-Equivariant GNNs

Haoyu Zhou, Ping Xue, Hao Zhang, Tianfan Fu
arXiv: 2603.05343v1 Published: 2026-03-05 Updated: 2026-03-05

AI Summary

Proposes a Geometric-Aware Quantization framework (GAQ) that compresses and accelerates GNN models while preserving SO(3) equivariance.

Key Contributions

  • Magnitude-Direction Decoupled Quantization (MDDQ)
  • Symmetry-aware training strategy
  • Robust attention normalization mechanism

Methodology

By quantizing vector magnitudes and directions separately, and combining this with symmetry-aware training and robust attention normalization, the method preserves equivariance at low bit widths.
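The decoupling idea can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: for simplicity it quantizes only the rotation-invariant magnitudes and leaves the unit directions in full precision, which already suffices to make quantization commute with rotations (the function name and the uniform quantizer are assumptions).

```python
import numpy as np

def mddq_quantize(vectors, bits=4):
    """Sketch of magnitude-direction decoupled quantization (hypothetical):
    split each vector feature into an invariant norm and a unit direction,
    uniformly quantize only the norm, then recombine. Because norms are
    unchanged by rotations, a global rotation commutes with this map."""
    norms = np.linalg.norm(vectors, axis=-1, keepdims=True)
    dirs = np.where(norms > 0, vectors / np.maximum(norms, 1e-12), 0.0)
    # Uniform quantization of the invariant magnitudes to 2**bits levels.
    qmax = 2**bits - 1
    max_norm = float(norms.max())
    scale = max_norm / qmax if max_norm > 0 else 1.0
    q_norms = np.round(norms / scale) * scale
    return q_norms * dirs
```

Because both the per-vector norms and the shared scale are rotation-invariant, `mddq_quantize(x @ R.T)` equals `mddq_quantize(x) @ R.T` for any rotation matrix `R`, up to floating-point error.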

Original Abstract

Equivariant Graph Neural Networks (GNNs) are essential for physically consistent molecular simulations but suffer from high computational costs and memory bottlenecks, especially with high-order representations. While low-bit quantization offers a solution, applying it naively to rotation-sensitive features destroys the SO(3)-equivariant structure, leading to significant errors and violations of conservation laws. To address this issue, in this work, we propose a Geometric-Aware Quantization (GAQ) framework that compresses and accelerates equivariant models while rigorously preserving continuous symmetry in discrete spaces. Our approach introduces three key contributions: (1) a Magnitude-Direction Decoupled Quantization (MDDQ) scheme that separates invariant lengths from equivariant orientations to maintain geometric fidelity; (2) a symmetry-aware training strategy that treats scalar and vector features with distinct quantization schedules; and (3) a robust attention normalization mechanism to stabilize gradients in low-bit regimes. Experiments on the rMD17 benchmark demonstrate that our W4A8 models match the accuracy of FP32 baselines (9.31 meV vs. 23.20 meV) while reducing Local Equivariance Error (LEE) by over 30x compared to naive quantization. On consumer hardware, GAQ achieves 2.39x inference speedup and 4x memory reduction, enabling stable, energy-conserving molecular dynamics simulations for nanosecond timescales.
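The abstract reports a Local Equivariance Error (LEE) without defining it here; one plausible way to measure how far a vector-valued map `f` deviates from SO(3) equivariance (the function name and normalization below are assumptions, not the paper's definition) is the mean discrepancy between rotating before and after applying `f`:

```python
import numpy as np

def local_equivariance_error(f, x, rotation):
    """Hypothetical LEE-style metric: mean norm of the difference between
    applying f to rotated inputs and rotating the outputs of f.
    An exactly equivariant f yields (numerically) zero."""
    gap = f(x @ rotation.T) - f(x) @ rotation.T
    return float(np.mean(np.linalg.norm(gap, axis=-1)))
```

Naive per-component rounding, a stand-in for naive quantization, produces a strictly positive error, while an equivariant map such as uniform scaling scores zero; this is the failure mode the abstract attributes to naively quantized rotation-sensitive features.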

Tags

Equivariant GNNs · Quantization · Molecular Dynamics

arXiv Category

cs.LG