WARP Logic Neural Networks
AI Summary
WARP logic neural networks reduce training cost and speed up inference by efficiently learning combinations of hardware-native logic blocks.
Key Contributions
- Proposes the WARP logic neural network framework
- The most parameter-efficient representation for exactly learning Boolean functions
- Introduces learnable thresholding and residual initialization
Methodology
Gradient-based learning that uses Walsh relaxation to learn probabilistic logic, combined with stochastic smoothing to bridge relaxed training and discrete logic inference.
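The core idea behind a Walsh relaxation is that any Boolean function has an exact expansion in the Walsh (Fourier) basis over {-1, +1} inputs, and the expansion coefficients can be learned by ordinary gradient descent. The sketch below is illustrative only, not the paper's actual parameterization: it recovers the Walsh spectrum of a 2-input AND gate by minimizing a squared error over its truth table.

```python
import numpy as np

# Truth table for 2-input AND in the {-1, +1} encoding (+1 = False, -1 = True).
# The Walsh basis for two inputs a, b is [1, a, b, a*b].
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
y = np.array([1.0, 1.0, 1.0, -1.0])  # AND is True only when both inputs are True

# Design matrix: each Walsh basis function evaluated on every input row.
Phi = np.stack([np.ones(4), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]], axis=1)

w = np.zeros(4)  # learnable Walsh coefficients, relaxed to real values
lr = 0.1
for _ in range(500):
    grad = 2.0 / len(y) * Phi.T @ (Phi @ w - y)  # gradient of the MSE loss
    w -= lr * grad

# The exact Walsh spectrum of AND is [1/2, 1/2, 1/2, -1/2],
# i.e. AND(a, b) = (1 + a + b - a*b) / 2 in this encoding.
print(np.round(w, 3))
```

Because the Walsh basis functions are orthogonal over the full truth table, this least-squares problem has a unique solution equal to the exact spectrum, which is one intuition for why a Walsh-based relaxation can represent Boolean functions without redundancy.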
Original Abstract
Fast and efficient AI inference is increasingly important, and recent models that directly learn low-level logic operations have achieved state-of-the-art performance. However, existing logic neural networks incur high training costs, introduce redundancy or rely on approximate gradients, which limits scalability. To overcome these limitations, we introduce WAlsh Relaxation for Probabilistic (WARP) logic neural networks -- a novel gradient-based framework that efficiently learns combinations of hardware-native logic blocks. We show that WARP yields the most parameter-efficient representation for exactly learning Boolean functions and that several prior approaches arise as restricted special cases. Training is improved by introducing learnable thresholding and residual initialization, while we bridge the gap between relaxed training and discrete logic inference through stochastic smoothing. Experiments demonstrate faster convergence than state-of-the-art baselines, while scaling effectively to deeper architectures and logic functions with higher input arity.
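To make the "stochastic smoothing" idea concrete: one way to connect a relaxed, probabilistic logic network to discrete inference is to treat relaxed values as probabilities, sample hard Boolean assignments from them, run the discrete gates, and average. The snippet below is a minimal sketch under that interpretation (not the paper's exact procedure), showing that the Monte Carlo estimate of a discrete AND gate matches the soft gate's output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Relaxed (probabilistic) logic: inputs are probabilities of being True,
# and a soft AND gate outputs P(a AND b) = pa * pb assuming independence.
pa, pb = 0.8, 0.6
soft_out = pa * pb  # 0.48

# Stochastic smoothing: sample hard Boolean inputs from the relaxed
# probabilities, evaluate the *discrete* gate, and average the outcomes.
n = 100_000
a = rng.random(n) < pa
b = rng.random(n) < pb
smoothed_out = np.mean(a & b)

print(soft_out, smoothed_out)  # the Monte Carlo estimate approaches 0.48
```

The averaged discrete evaluations converge to the relaxed output, which is the sense in which smoothing bridges the gap between continuous training and hard logic inference.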